=Paper=
{{Paper
|id=Vol-3248/paper17
|storemode=property
|title=An Easily Identifiable Emission Waveform Design in Visible Light Positioning
|pdfUrl=https://ceur-ws.org/Vol-3248/paper17.pdf
|volume=Vol-3248
|authors=Qing Wang,Deyue Zou
|dblpUrl=https://dblp.org/rec/conf/ipin/WangZ22
}}
==An Easily Identifiable Emission Waveform Design in Visible Light Positioning==
An Easily Identifiable Emission Waveform Design in Visible Light Positioning

Qing Wang, Deyue Zou*
School of Information and Communication Engineering, Dalian University of Technology, Dalian, China

Abstract

Visible light positioning (VLP) based on an image sensor uses light sources to emit ID (identity) signals containing the positions of the light sources, with a mobile phone as the receiver. Stripe images carrying the ID information are acquired through the rolling shutter effect of the complementary metal oxide semiconductor (CMOS) image sensor, and the geometric relationship between the object points and the image points of the light sources is then used to complete the positioning. Quantization error is inevitable when obtaining the imaging coordinates from stripe images. Based on this, we design an easily identifiable light source signal mechanism consisting of two parts: an information sequence and an all-bright sequence. We use different methods to obtain the imaging coordinates of the light sources and fit the deviation between the ideal and the identified imaging coordinates; the fitting results show that the imaging deviations obey a Gaussian distribution. We then simulate positioning with the imaging deviation taken into account, and the results show that the positioning error can be kept within 1.3 cm under the proposed signal mechanism and identification methods.

Keywords: Visible light positioning, image sensor, Gaussian distribution, image point coordinates

IPIN 2022 WiP Proceedings, September 5-7, 2022, Beijing, China
* Corresponding author: hlgwangqing@163.com (Q. Wang); zoudeyue@dlut.edu.cn (D. Zou)
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

1. Introduction

Outdoor positioning systems such as the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS) cannot position accurately in complex indoor environments. In large shopping malls, museums, subway stations, etc., indoor position-based services have become an urgent need [1]. In recent years, indoor positioning technologies have emerged one after another, such as ZigBee [2], Wireless Local Area Network (WLAN) [3], Bluetooth [4], Ultra Wide Band (UWB) [5], and Radio Frequency Identification (RFID) [6]. However, they are not widely used because of inherent flaws, including low positioning accuracy and expensive devices. With the development of LED (light-emitting diode) light sources, visible light positioning (VLP) has become a research hotspot [7]. As an alternative to those techniques, VLP uses visible light sources as beacons for position estimation. Meanwhile, it can use the existing lighting infrastructure with minimal modification, which greatly reduces deployment costs [8, 9, 10].

A VLP system is mainly composed of a transmitter and a receiver. The transmitter is generally a light source, and receivers are divided into photodiode (PD) based and image sensor (IS) based ones [11]. A PD is sensitive to the beam direction, which greatly limits the mobility of the positioning terminal, and PD-based positioning requires extremely accurate angle and received-signal-strength measurements, otherwise large positioning errors result [12].
In contrast, ISs are widely incorporated into mobile devices, so high-accuracy positioning can be completed with nothing more than a mobile phone camera [13]. In IS-based VLP, the actual positions of the light sources are encoded and loaded onto the light sources, which emit them in the form of high-frequency flashing. The flicker frequency of the modulated light sources reaches the kHz range, so a CMOS image sensor can detect the changes and image them as light and dark stripe images [14]. The receiver identifies the images, decodes the ID information of the light sources, and calculates the imaging coordinates $(u_{i'}, v_{i'})$ of the light sources. By querying a pre-established ID database, the three-dimensional (3-D) position coordinates $(X_i, Y_i, Z_i)$ of the light sources are matched [15]. The position of the IS can then be calculated through the geometric relationship between the object point coordinates and the image point coordinates, that is, positioning is achieved.

When calculating the coordinates of the light sources on the imaging plane, there are inevitable quantization errors, which lead to positioning deviations because of the variety of stripe images. Therefore, this paper proposes an easily identifiable signal mechanism as the emission signal of the light sources, together with two identification algorithms based on this signal mechanism. By fitting the imaging deviations of the light sources obtained from actual measurements, we find that the deviations conform to a Gaussian distribution. After taking the imaging deviation into account, we test positioning with the obtained data, and the results show that the positioning accuracy of the proposed algorithms is better than that of direct identification.

2. Single camera positioning algorithm

IS-based VLP estimates the position and attitude information of the IS. To simplify the experiments, we only discuss the case where the IS is fixed at a constant height. Fig. 1 shows a typical indoor positioning system. We install 4 LEDs on the ceiling of the room such that they are not collinear. Each LED transmits a unique ID signal representing its own 3-D coordinates over the optical channel through OOK modulation. We use a mobile phone as the receiver to capture images of the light sources and then detect the IDs and image coordinates of the LEDs. With this information, we use the following algorithm to compute the receiver position $P = (X_r, Y_r, Z_r)$.

Let the 3-D positions of the LEDs be $(X_i, Y_i, Z_i)$, $i = 1, 2, 3, 4$, where $i$ indexes the light sources. The height of the mobile phone below the ceiling is $H$, and the distance from each LED to the lens $Q$ is denoted $R_1, R_2, R_3, R_4$. They can be expressed as (1)-(2):

$$(X_i - X_r)^2 + (Y_i - Y_r)^2 + (Z_i - Z_r)^2 = R_i^2, \quad i = 1, 2, 3, 4 \tag{1}$$

$$Z_i = H + Z_r \tag{2}$$

After imaging by the IS, we obtain the coordinates $(u_{i'}, v_{i'})$, $i' = 1, 2, 3, 4$, of each LED on the imaging plane, where $i'$ indexes the imaged light sources.

[Figure 1: Visible light positioning system.]
We can then calculate the distances $d_{i'}$ and $q_{i'}$ from each image point to the image centre $(u_0, v_0)$ and to the optical centre $Q$ of the lens, as shown in (3)-(4):

$$d_{i'} = \sqrt{(u_{i'} - u_0)^2 + (v_{i'} - v_0)^2}, \quad i' = 1, 2, 3, 4 \tag{3}$$

$$q_{i'} = \sqrt{f^2 + d_{i'}^2}, \quad i' = 1, 2, 3, 4 \tag{4}$$

According to the triangle similarity principle, the distance from each LED to the lens can be calculated as shown in (5):

$$\frac{R_i}{q_{i'}} = \frac{H}{f} \;\Rightarrow\; R_i = \frac{H}{f} \cdot q_{i'} \tag{5}$$

where $f$ is the known focal length of the camera. As a result, (1) can be rewritten as (6):

$$(X_i - X_r)^2 + (Y_i - Y_r)^2 = H^2 \cdot \left(\frac{q_{i'}^2}{f^2} - 1\right) \tag{6}$$

Therefore, the receiver coordinates in (6) can be obtained with the least squares method:

$$[X_r; Y_r] = (A^T A)^{-1} A^T D \tag{7}$$

where $A$ and $D$ are given by (8)-(9), obtained by subtracting the $i = 1$ equation of (6) from the others to eliminate the quadratic terms in $X_r$ and $Y_r$:

$$A = 2\begin{bmatrix} X_2 - X_1 & Y_2 - Y_1 \\ X_3 - X_1 & Y_3 - Y_1 \\ X_4 - X_1 & Y_4 - Y_1 \end{bmatrix} \tag{8}$$

$$D = \begin{bmatrix} (X_2^2 - X_1^2) + (Y_2^2 - Y_1^2) + H^2 \cdot \frac{q_{1'}^2 - q_{2'}^2}{f^2} \\ (X_3^2 - X_1^2) + (Y_3^2 - Y_1^2) + H^2 \cdot \frac{q_{1'}^2 - q_{3'}^2}{f^2} \\ (X_4^2 - X_1^2) + (Y_4^2 - Y_1^2) + H^2 \cdot \frac{q_{1'}^2 - q_{4'}^2}{f^2} \end{bmatrix} \tag{9}$$
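For concreteness, the following is a minimal numpy sketch of the least-squares solver in (3)-(9); the function and variable names are ours, and the focal length must be expressed in the same units as the pixel coordinates.

```python
import numpy as np

def estimate_position(led_xy, img_uv, u0, v0, f, H):
    """Least-squares receiver position from (7)-(9).

    led_xy : 4x2 array of LED world coordinates (X_i, Y_i)
    img_uv : 4x2 array of image-point coordinates (u_i', v_i')
    (u0, v0): image centre; f: focal length (same units as pixels)
    H      : height of the camera below the ceiling
    """
    led_xy = np.asarray(led_xy, dtype=float)
    img_uv = np.asarray(img_uv, dtype=float)
    # (3)-(4): distances to the image centre and to the lens
    d = np.hypot(img_uv[:, 0] - u0, img_uv[:, 1] - v0)
    q2 = f ** 2 + d ** 2                          # q_i'^2
    X, Y = led_xy[:, 0], led_xy[:, 1]
    # (8): subtract the i = 1 equation of (6) to linearise
    A = 2.0 * np.column_stack((X[1:] - X[0], Y[1:] - Y[0]))
    # (9): the corresponding constant terms
    D = (X[1:] ** 2 - X[0] ** 2) + (Y[1:] ** 2 - Y[0] ** 2) \
        + H ** 2 * (q2[0] - q2[1:]) / f ** 2
    # (7): [Xr, Yr] = (A^T A)^(-1) A^T D
    pos, *_ = np.linalg.lstsq(A, D, rcond=None)
    return pos
```

The solver returns the 2-D receiver coordinates $(X_r, Y_r)$; $Z_r$ then follows from the known camera height via (2).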
3. The proposed identification method

3.1. Transmit signal design

To obtain the image point coordinates of the light sources for the positioning algorithm of Section 2, the captured light source images must be identified. Because of the high scanning frequency of the camera, the imaged stripe patterns are not all complete circular contours, as shown in Fig. 2, which causes inevitable quantization errors and affects the subsequent positioning. Therefore, we propose an easily identifiable design for the light source emission signal.

[Figure 2: Different LED stripe images captured by the CMOS sensor, panels (a)-(d).]

Under the premise of satisfying indoor lighting, we choose every 4 light sources as a positioning group, placed so that they are not collinear. The emission data of each light source consists of two parts: the information sequence $ID_i$ and the all-bright sequence $S_i$. The information sequence $ID_i$ is the ID information that matches the actual 3-D position of the light source. Because of the excellent autocorrelation characteristics of pseudo-random codes, we choose a pseudo-random code as the ID of each light source. The all-bright sequence $S_i$, as the name implies, is used to identify the image point of the light source and to support ID demodulation. The transmitter uses OOK modulation to load the 3-D positions onto the light sources at a frequency of 3 kHz. Because of the high flicker frequency, the human eye cannot perceive the flickering, which preserves the lighting function of the light source while the positioning transmitter works.

To verify the superiority of the proposed signal, we discuss the case of 4 indoor light sources, for which the required number of IDs is small. Therefore, a pseudo-noise sequence $ID_i = [id_i^1, id_i^2, \ldots, id_i^{N_1}]$, $i = 1, 2, 3, 4$ (including but not limited to shifts of a single code), with code length $N_1 = 7$ and duration $T/4$, is assigned to each light source. To ensure that the receiver can identify the light sources quickly and accurately, two all-bright sequences $S_i = [s_i^1, s_i^2, \ldots, s_i^{N_2}]$ of duration $T/4$ and code length $N_2 = 7$ are added for each light source. Since the light source is grounded at the cathode, $s_i^j$ here represents the binary all-'1' symbol.

So that both the information sequence and the all-bright sequence are imaged within $T$, the all-bright sequences need to be shifted, that is, there is always a $T/4$ delay between the all-bright sequences of adjacent light sources, as shown in Fig. 3. The total emission sequences of the light source group within $T$ are therefore given by (10), where '$\otimes$' denotes the Kronecker product and '$\oplus$' denotes modulo-two addition. The light source group broadcasts the total emission sequence cyclically with period $T$, and $T/4$ is the exposure time of the receiver.

[Figure 3: The modulated signal waveform for the four LEDs, slot length $T/4$.]

$$\begin{aligned} C_{LED1} &= ([0\ 0\ 1\ 1] \otimes [id_1^1, id_1^2, \ldots, id_1^{N_1}]) \oplus ([1\ 1\ 0\ 0] \otimes [s_1^1, s_1^2, \ldots, s_1^{N_2}]) \\ C_{LED2} &= ([1\ 0\ 0\ 1] \otimes [id_2^1, id_2^2, \ldots, id_2^{N_1}]) \oplus ([0\ 1\ 1\ 0] \otimes [s_2^1, s_2^2, \ldots, s_2^{N_2}]) \\ C_{LED3} &= ([1\ 1\ 0\ 0] \otimes [id_3^1, id_3^2, \ldots, id_3^{N_1}]) \oplus ([0\ 0\ 1\ 1] \otimes [s_3^1, s_3^2, \ldots, s_3^{N_2}]) \\ C_{LED4} &= ([0\ 1\ 1\ 0] \otimes [id_4^1, id_4^2, \ldots, id_4^{N_1}]) \oplus ([1\ 0\ 0\ 1] \otimes [s_4^1, s_4^2, \ldots, s_4^{N_2}]) \end{aligned} \tag{10}$$

In an actual environment the number of light sources may be larger than 4, in which case $N_1$ and $N_2$ can be lengthened according to the number of light sources, so as to ensure that there are enough IDs for positioning.
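As an illustration of (10), here is a short Python sketch that assembles one period of each LED's emission sequence from its ID code and the all-bright sequence. The slot masks are read off our reconstruction of (10) and should be checked against the original figure, and the example PN code is purely illustrative.

```python
import numpy as np

# Slot masks per LED, read off (10): the all-bright window shifts by
# one T/4 slot from LED to LED, and the ID mask is its complement.
ID_MASK     = {1: [0, 0, 1, 1], 2: [1, 0, 0, 1],
               3: [1, 1, 0, 0], 4: [0, 1, 1, 0]}
BRIGHT_MASK = {1: [1, 1, 0, 0], 2: [0, 1, 1, 0],
               3: [0, 0, 1, 1], 4: [1, 0, 0, 1]}

def emission_sequence(led, id_bits):
    """C_LEDi = (id_mask ⊗ ID_i) ⊕ (bright_mask ⊗ S_i), S_i all '1'."""
    id_bits = np.asarray(id_bits, dtype=int)      # length-N1 PN code
    s = np.ones_like(id_bits)                     # all-bright S_i, N2 = N1
    return np.kron(ID_MASK[led], id_bits) ^ np.kron(BRIGHT_MASK[led], s)

# illustrative length-7 PN code for LED1:
print(emission_sequence(1, [1, 0, 0, 1, 0, 1, 1]))
# -> a 28-chip frame: two all-'1' T/4 slots followed by two ID slots
```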
3.2. LED light source identification

We select the IS mounted on a mobile phone as the receiver and aim it at the light source array within time $T$ to obtain 4 consecutive photos $G_k$, $k = 1, 2, 3, 4$, where $k$ indexes the photos. The acquired colour images are first transformed into grayscale and then into binary images. We then segment the binary images to process each light source in the captured images separately. Finally, we obtain the processed images $G_k'$ and pixel matrices $B_k^i(x, y)_{N \times M}$, $i, k = 1, 2, 3, 4$, where $(x, y)$ are the image coordinates of the pixels, $N$ is the number of pixels per row (i.e., horizontally), and $M$ is the number of pixels per column (i.e., vertically).

After completing the above steps, an identification algorithm extracts the centroid coordinates of the light sources. Traditional identification either finds the upper, lower, left, and right edges of the light source stripe image and takes the midpoint as the imaging coordinate, or fits the circular edge of the light source and takes the centre of the circle as the imaging coordinate. Both approaches suffer from the errors illustrated in Fig. 2. Based on the signal design of Section 3.1, we propose two identification methods, summarised in Table 1 and Table 2: (1) Algorithm 1, the image method; (2) Algorithm 2, the fitting method. We detect the area of each light source in the image $G_k'$ and calculate its two-dimensional pixel coordinates. Taking LED1 as an example, the main steps of the two methods are described in Algorithm 1 and Algorithm 2, and some identification results are shown in Fig. 4.

[Figure 4: (a) Partial identification results calculated by algorithm 1. (b) Partial identification results calculated by algorithm 2.]

Table 1: The process of the image method

Algorithm 1: Image method
Input: LED1 images
Output: LED1 centroid coordinates $(u_1', v_1')$
1. Traverse the column pixels of any one $B_k^1(x, y)_{N \times M}$, $k = 1, 2, 3, 4$;
2. Record the column $n_{k1}$ whose pixel values first become non-zero and the column $n_{k2}$ whose pixel values become zero again;
3. Traverse the row pixels of the $B_k^1(x, y)_{N \times M}$ selected in step 1;
4. Record the row $m_{k1}$ whose pixel values first become non-zero and the row $m_{k2}$ whose pixel values become zero again;
5. Calculate the pixel value
$$I = \sum_{x = n_{k1}}^{n_{k2}} B_k^1(x, y)\Big|_{y = \frac{m_{k1} + m_{k2}}{2}} \tag{11}$$
6. Take the centroid coordinates $\left(\frac{n_{k1} + n_{k2}}{2}, \frac{m_{k1} + m_{k2}}{2}\right)$ of the light source in the image where $I_{\max}$ occurs as the imaging coordinates $(u_1', v_1')$ of LED1.

Table 2: The process of the fitting method

Algorithm 2: Fitting method
Input: LED1 images
Output: LED1 centroid coordinates $(u_1', v_1')$
1. Same as steps 1-5 of Algorithm 1;
2. Widen the boundary values $[n_{k1}, n_{k2}]$, $[m_{k1}, m_{k2}]$ of the light source in the image where $I_{\max}$ occurs by a range threshold of 10 pixels, and set the pixel values outside $[n_{k1} - 10, n_{k2} + 10]$, $[m_{k1} - 10, m_{k2} + 10]$ to 0;
3. Use the Sobel operator to extract the edge points $(x_{edge}, y_{edge})$ of the light source processed in step 2;
4. Fit the circle
$$F(c) = (x_{edge} - c_1)^2 + (y_{edge} - c_2)^2 - c_3^2 \tag{12}$$
to the edge points of step 3 by least squares, and take the centre $(c_1, c_2)$ as the imaging coordinates $(u_1', v_1')$ of LED1.
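A compact numpy sketch of the two identification methods follows. It assumes the binarised per-LED sub-images $B_k^1$ have already been segmented and that the Sobel edge points are given; the circle fit uses the standard algebraic (linearised) least-squares form.

```python
import numpy as np

def bright_window(B):
    """Steps 1-4 of Algorithm 1: first/last non-zero column and row
    of one binarised sub-image B, indexed as B[row, col]."""
    B = np.asarray(B)
    cols = np.flatnonzero(B.any(axis=0))
    rows = np.flatnonzero(B.any(axis=1))
    return cols[0], cols[-1], rows[0], rows[-1]   # nk1, nk2, mk1, mk2

def image_method(frames):
    """Algorithm 1: of the 4 frames, the all-bright one maximises the
    middle-row intensity sum I of (11); its box midpoint is (u', v')."""
    best_I, centroid = -np.inf, None
    for B in frames:
        n1, n2, m1, m2 = bright_window(B)
        I = B[(m1 + m2) // 2, n1:n2 + 1].sum()    # eq. (11)
        if I > best_I:
            best_I, centroid = I, ((n1 + n2) / 2, (m1 + m2) / 2)
    return centroid

def fitting_method(xe, ye):
    """Algorithm 2, step 4: least-squares circle through the Sobel edge
    points, linearised as x^2 + y^2 = 2*c1*x + 2*c2*y + (c3^2-c1^2-c2^2)."""
    xe, ye = np.asarray(xe, float), np.asarray(ye, float)
    A = np.column_stack((2 * xe, 2 * ye, np.ones_like(xe)))
    b = xe ** 2 + ye ** 2
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c[0], c[1]                             # circle centre (u', v')
```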
4. Experiment and analysis

Considering the strictness of synchronization, and to simplify the experiment, we use a smartphone to capture video at a frame rate of 24 fps and convert the video into frame images, which are then processed in Matlab. The key system parameters are provided in Table 3.

Table 3: Parameters of the experiment

Test space unit size ($L \times W \times H$): 50 × 50 × 50 cm³
LED diameter: 2.2 cm
LED1 position: (10, 40, 50)
LED2 position: (40, 40, 50)
LED3 position: (40, 10, 50)
LED4 position: (10, 10, 50)
Frequency of LED transmitters: 3 kHz
Camera height: 10 cm
Camera resolution: 1920 × 1080
ISO: 100
Exposure time: 1/3000 s
Focal length of the lens $f$: 26 mm
Image sensor: Sony IMX686
Test point ($L, W, H$): (25, 25, 10)

To verify the proposed methods, we use the camera of a smartphone (iPhone 12) to capture video of the light sources carrying the information sequence described in Section 3.2 at the test point (25, 25, 10). With the height of the mobile phone held constant at the test point, the posture of the phone is adjusted so that each LED is imaged as a circular spot as far as possible. The video is then converted into 12,000 light source images, and the two algorithms proposed in Section 3.2 are used to calculate the LED centroid coordinates from every 4 consecutive images. Under the high shutter speed, the background is almost completely black and has nearly no effect on the light source, but some noise interference is still introduced. The few coordinate points with large deviations in the upper left corner of Fig. 5 are caused by such noise: large coordinate deviations occur when light spots interfere in individual frames. We therefore use the $3\sigma$ criterion to remove these noisy samples. As shown in the enlarged part of Fig. 5, after removing the noisy data the number of samples is 3000, and all 4 methods deviate from the ideal imaging coordinate.

Fig. 5 compares the centroid coordinates of LED1 obtained by the 4 methods: the black upward triangles are the centroid coordinates obtained by fitting the stripe image directly without the signal of Section 3.1; the blue downward triangles are those obtained by the image method applied directly to the stripe image without the signal of Section 3.1; the yellow stars are those obtained by Algorithm 1 of Section 3.2 after loading the signal mechanism of Section 3.1; and the red squares are those obtained by Algorithm 2 of Section 3.2 after loading the signal mechanism of Section 3.1.

[Figure 5: The centroid coordinates of LED1 obtained using different methods (column pixel value vs. row pixel value, with an enlarged view around the ideal imaging coordinate).]

4.1. Identification error

To see the errors of the different identification methods more clearly, we use the root mean square error (RMSE) as the identification error:

$$error_i = \sqrt{(u_i' - u_t')^2 + (v_i' - v_t')^2} \tag{13}$$

where $(u_i', v_i')$ are the imaging coordinates of LEDi measured by the above 4 methods at the test point, and $(u_t', v_t')$ are the ideal imaging coordinates at the test point. The cumulative distribution function (CDF) is the integral of the probability density function and describes the probability distribution of the identification error. Taking LED1 as an example, the error CDFs of the different identification methods are shown in Fig. 6 and Fig. 7. Fig. 6 shows the deviation of the centroid coordinates of the directly identified fringe images from the ideal imaging coordinates, and Fig. 7 shows the deviation of the centroid coordinates obtained with the algorithms of Section 3.2 from the ideal imaging coordinates.

[Figure 6: The coordinate deviation CDF of the directly identified fringe images.]

[Figure 7: The coordinate deviation CDF of algorithm 1 and algorithm 2.]

It can be seen that the coordinate error of the light source identified by fitting the stripe image directly ranges from 0 to 50 pixels, with a 57% probability of exceeding 10 pixels. The error of the direct image method ranges from 0 to 60 pixels, with a 68% probability of exceeding 10 pixels. Both errors are large. With the signal mechanism of Section 3.1, the errors of identifying the all-bright spot are all within about 0-3 pixels. Among them, Algorithm 2 calculates on image pixels, and the pixel coordinates are integers, so many error values repeat.
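For reference, the error statistics behind Figs. 6-7 can be reproduced with a few lines of numpy. This is a sketch under the assumption that the measured and ideal image coordinates are available as arrays; the helper names are ours.

```python
import numpy as np

def identification_error(est_uv, ideal_uv):
    """Eq. (13): pixel-domain error of each identified centroid."""
    est_uv = np.asarray(est_uv, dtype=float)
    return np.hypot(est_uv[:, 0] - ideal_uv[0],
                    est_uv[:, 1] - ideal_uv[1])

def reject_3sigma(err):
    """The 3-sigma criterion used above to drop noise-corrupted samples."""
    return err[np.abs(err - err.mean()) <= 3 * err.std()]

def empirical_cdf(err):
    """Sorted errors vs. cumulative probability, as plotted in Figs. 6-7."""
    x = np.sort(err)
    return x, np.arange(1, x.size + 1) / x.size

# e.g. the probability that the direct fitting error exceeds 10 pixels:
# err = reject_3sigma(identification_error(measured, ideal))
# p_exceed = (err > 10).mean()
```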
We fit the coordinate deviation distributions obtained by the above four methods. Fig. 8 shows the abscissa deviation distributions fitted from the histograms of the coordinate deviations of the 4 methods, and Fig. 9 shows the corresponding ordinate deviation distributions. The deviations of the actual imaging coordinates of the light source conform to the Gaussian distribution in (14), with the parameters listed in Table 4.

$$y = y_0 + A_0 e^{-\frac{(x - x_c)^2}{2w^2}} \tag{14}$$

Table 4: Gaussian distribution parameters

Figure | $y_0$ | $A_0$ | $x_c$ | $w$
Fig. 8(a) | 0.00128 | 0.1575 | -0.90724 | 2.50698
Fig. 9(a) | 0.00413 | 0.17473 | 1.15132 | 10.66326
Fig. 8(b) | 0.00418 | 0.12752 | -0.62293 | 1.45571
Fig. 9(b) | 0.00119 | 0.10844 | 4.80804 | 17.96898
Fig. 8(c) | 0.00144 | 0.18235 | -0.10493 | 0.21353
Fig. 9(c) | 0.00082384 | 0.68534 | 0.43672 | 0.28756
Fig. 8(d) | -0.00245 | 0.16772 | 0.16288 | 0.24843
Fig. 9(d) | 0.01512 | 0.54942 | 0.48163 | 0.11644

[Figure 8: Histograms and Gaussian fits of the abscissa deviation values, panels (a)-(d).]

[Figure 9: Histograms and Gaussian fits of the ordinate deviation values, panels (a)-(d).]

4.2. Positioning error

In our positioning system, the main source of positioning error is the quantization error caused by the image identification of the LEDs described in Section 4.1, so the ideal imaging coordinates of the light source are rewritten as (15):

$$\begin{pmatrix} u_i' \\ v_i' \end{pmatrix} = \begin{pmatrix} u_t' \\ v_t' \end{pmatrix} + \begin{pmatrix} n_{gx} \\ n_{gy} \end{pmatrix} \tag{15}$$

where $n_{gx}(x_c, w^2)$ and $n_{gy}(x_c, w^2)$ are mutually independent Gaussian noises on the horizontal and vertical axes of the light source on the imaging plane, with the means and standard deviations listed in Table 4.

Considering the identification error, we run positioning tests on the data obtained by each method at the test point and compute the positioning error as the RMSE between the estimated positions and the test point. The statistical results are shown in Fig. 10. Algorithm 1 and Algorithm 2 perform better than the direct solutions: more than 90% of the positioning errors are within 1.69 cm when the fitted stripe coordinates are used directly and within 1.77 cm when the stripe coordinates identified by the image method are used directly, while more than 90% of the positioning errors are within 1.26 cm for Algorithm 1 and within 1.25 cm for Algorithm 2. The two proposed algorithms therefore both achieve high accuracy.

[Figure 10: The cumulative distribution function (CDF) of the 2-D positioning error for the four methods.]
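The Monte-Carlo positioning test of (15) can be sketched as below, reusing the estimate_position solver from the Section 2 sketch. The noise parameters are the $x_c$ and $w$ values of Table 4 (here those of Fig. 8(c)/9(c), i.e. Algorithm 1), applying the same distribution to all four LEDs as an assumption; the per-trial error against the test point then yields curves like Fig. 10.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_positions(led_xy, ideal_uv, u0, v0, f, H,
                       mean_u, w_u, mean_v, w_v, n=3000):
    """Eq. (15): perturb the ideal image points of the 4 LEDs with
    independent Gaussian noise and solve for the position each time."""
    est = np.empty((n, 2))
    for t in range(n):
        noisy = np.asarray(ideal_uv, dtype=float).copy()
        noisy[:, 0] += rng.normal(mean_u, w_u, 4)   # n_gx per LED
        noisy[:, 1] += rng.normal(mean_v, w_v, 4)   # n_gy per LED
        est[t] = estimate_position(led_xy, noisy, u0, v0, f, H)
    return est

# positioning error per trial against the test point (Xt, Yt):
# err = np.hypot(est[:, 0] - Xt, est[:, 1] - Yt)
# np.quantile(err, 0.9)   # the 90th-percentile error plotted in Fig. 10
```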
5. Conclusion

Indoor visible light positioning based on a single camera suffers large errors when fringe coordinates are calculated directly as the imaging centroid coordinates of the light sources. In this paper, we proposed a light source emission signal mechanism combining an information sequence and an all-bright sequence, and we compared and fitted the imaging deviations of different identification methods. The results show that the imaging deviations obey a Gaussian distribution. We simulated positioning after taking the identification error into account, and the results show that the two identification methods based on this signal mechanism can reduce the positioning error to within 1.3 cm, a large improvement over identifying the light source directly. This paper tests positioning repeatedly at the test point, and the statistical results show the superiority of the proposed signal in positioning accuracy. In future work, we plan to transplant the positioning system into a real environment, consider the influence of non-positioning light sources (such as sunlight), and complete random and dynamic positioning.

6. Acknowledgments

This research was supported by the National Natural Science Foundation of China (62171075).

References

[1] C. Qiu, M. W. Mutka, CRISP: cooperation among smartphones to improve indoor position information, Wireless Networks 24 (2018) 867-884. URL: http://dx.doi.org/10.1007/s11276-016-1373-1.
[2] M. Smieja, Zigbee phase shift measurement approach to mobile inspection robot indoor positioning techniques, Diagnostyka 19 (2018) 101-107. URL: http://dx.doi.org/10.29354/diag/94498.
[3] M. D. Redzic, C. Laoudias, I. Kyriakides, Image and WLAN bimodal integration for indoor user localization, IEEE Transactions on Mobile Computing 19 (2020) 1109-1122. URL: http://dx.doi.org/10.1109/TMC.2019.2903044.
[4] L. Pei, J. Liu, R. Guinness, Y. Chen, T. Kröger, R. Chen, L. Chen, The evaluation of WiFi positioning in a Bluetooth and WiFi coexistence environment, in: 2012 Ubiquitous Positioning, Indoor Navigation, and Location Based Service (UPINLBS), 2012, pp. 1-6. doi:10.1109/UPINLBS.2012.6409768.
[5] K. Paszek, D. Grzechca, A. Becker, Design of the UWB positioning system simulator for LOS/NLOS environments, Sensors 21 (2021). doi:10.3390/s21144757.
[6] Y.-F. Hsu, C.-S. Cheng, W.-C. Chu, COMPASS: an active RFID-based real-time indoor positioning system, Human-centric Computing and Information Sciences 12 (2022) 17921-17942. URL: http://dx.doi.org/10.22967/HCIS.2022.12.007.
[7] H. Cheng, C. Xiao, Y. Ji, J. Ni, T. Wang, A single LED visible light positioning system based on geometric features and CMOS camera, IEEE Photonics Technology Letters 32 (2020) 1097-1100. doi:10.1109/LPT.2020.3012476.
[8] Y. Zhuang, L. Hua, L. Qi, J. Yang, P. Cao, Y. Cao, Y. Wu, J. Thompson, H. Haas, A survey of positioning systems using visible LED lights, IEEE Communications Surveys & Tutorials 20 (2018) 1963-1988. doi:10.1109/COMST.2018.2806558.
[9] A. Raza, L. Lolic, S. Akhter, M. Liut, Comparing and evaluating indoor positioning techniques, in: 2021 International Conference on Indoor Positioning and Indoor Navigation (IPIN), 2021, pp. 1-8. doi:10.1109/IPIN51156.2021.9662632.
[10] K. Abe, T. Sato, H. Watanabe, H. Hashizume, M. Sugimoto, Smartphone positioning using an ambient light sensor and reflected visible light, in: 2021 International Conference on Indoor Positioning and Indoor Navigation (IPIN), 2021, pp. 1-8. doi:10.1109/IPIN51156.2021.9662520.
[11] J. Jin, L. Feng, J. Wang, D. Chen, H. Lu, Signature codes in visible light positioning, IEEE Wireless Communications 28 (2021) 178-184. URL: http://dx.doi.org/10.1109/MWC.001.2000540.
[12] W. Guan, S. Wen, H. Zhang, L. Liu, A novel three-dimensional indoor localization algorithm based on visual visible light communication using single LED, in: 2018 IEEE International Conference on Automation, Electronics and Electrical Engineering (AUTEEE), 2018, pp. 202-208. doi:10.1109/AUTEEE.2018.8720798.
[13] H. Li, H. Huang, Y. Xu, Z. Wei, S. Yuan, P. Lin, H. Wu, W. Lei, J. Fang, Z. Chen, A fast and high-accuracy real-time visible light positioning system based on single LED lamp with a beacon, IEEE Photonics Journal 12 (2020) 1-12. doi:10.1109/JPHOT.2020.3032448.
[14] W. Guan, L. Huang, B. Hussain, C. P. Yue, Robust robotic localization using visible light positioning and inertial fusion, IEEE Sensors Journal 22 (2022) 4882-4892. doi:10.1109/JSEN.2021.3053342.
[15] Y. Wu, X. Liu, W. Guan, B. Chen, X. Chen, C. Xie, High-speed 3D indoor localization system based on visible light communication using differential evolution algorithm, Optics Communications 424 (2018) 177-189. doi:10.1016/j.optcom.2018.04.062.