<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>High-Precision Camera-enabled Visible Light Positioning System with Enhanced LED Recognition</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Bugao Liu</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dingyu Ge</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zhigang Ma</string-name>
          <email>mazhigang@ncu.edu.cn</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Xiaodong Liu</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zhenghai Wang</string-name>
          <email>zhenghai.wang@ncu.edu.cn</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Xun Zhang</string-name>
          <email>xun.zhang@isep.fr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institut Supérieur D'électronique de Paris</institution>
          ,
          <addr-line>10 rue de vanves, Issy-les-moulineaux, 92130</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Nanchang University</institution>
          ,
          <addr-line>999 Xuefu Avenue, Honggutan District, Nanchang, 330031, Jiangxi</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Visible light positioning (VLP) technology has attracted widespread research interest owing to its high-precision positioning, immunity to electromagnetic interference, abundant spectral resources, and cost-effectiveness. However, the region of interest (ROI) in existing camera-enabled VLP systems is highly susceptible to environmental noise, which degrades LED beacon recognition accuracy and complicates the simultaneous realization of precise positioning and stable lighting. To tackle these challenges, a novel indoor camera-enabled VLP system is proposed. Specifically, a SuperGlue-based tracking detection and dynamic positioning algorithm is proposed to replace the traditional LED area detection method relying on pixel intensity, thereby significantly enhancing the robustness of the VLP system while ensuring real-time performance and high accuracy. Moreover, a VLP testbench is developed to evaluate the positioning performance. Experimental results demonstrate that, within a test area of 4 × 4 × 3 m³, 80% of the positioning errors are within 8.2 cm. This indicates that the proposed system exhibits superior positioning accuracy compared to existing positioning methods.</p>
      </abstract>
      <kwd-group>
        <kwd>Visible light positioning</kwd>
        <kwd>image sensor</kwd>
        <kwd>region of interest</kwd>
        <kwd>feature extraction</kwd>
        <kwd>SuperGlue</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        With the rapid advancement of the Internet of Things (IoT) and the proliferation of smart devices,
coupled with the fact that a significant proportion of human activities occur indoors, indoor positioning
has emerged as one of the core enabling technologies, attracting increasing attention from both academia
and industry [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. In response to the growing demand for accurate and reliable indoor positioning, a
variety of technologies have been developed. In this context, traditional positioning systems, including
Bluetooth, Wireless Fidelity (Wi-Fi), and ultra-wideband (UWB) have exhibited considerable potential
and broad applicability [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Nevertheless, these radio frequency (RF)-based positioning systems face
challenges in balancing positioning accuracy with hardware cost and are susceptible to electromagnetic
interference, particularly in scenarios involving dense terminal deployments in indoor environments
[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Therefore, it is imperative to explore complementary technologies to address the growing demand
for wide-area and high-precision positioning in complex indoor environments.
      </p>
      <p>
        Visible light positioning has emerged as a promising solution by harnessing existing light emitting
diode (LED) infrastructure and abundant visible light spectrum to support wide-area positioning
services with cost-effective deployment, while its short wavelength enables centimeter-level positioning
accuracy [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. Moreover, VLP is suitable for environments with severe RF interference or high electromagnetic
sensitivity, owing to the inherent immunity of visible light to electromagnetic interference [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        Compared to photodiode (PD)-based VLP systems, camera-based VLP systems offer several
distinct advantages, including higher spatial resolution through detailed image capture, the ability to
simultaneously track multiple targets without requiring array deployment, and the integration of rich
environmental data for enhanced accuracy and robustness[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Although the camera-based positioning
method demonstrates greater adaptability to ambient light interference compared to PD-based systems,
it still exhibits significant limitations in dynamic environments, particularly in terms of insufficient
robustness and suboptimal real-time positioning performance [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Considering these facts, the authors in
[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] designed and implemented a robust and flexible indoor VLP system based on collaborative LEDs
and edge computing, which addressed the failure of existing collaborative LED positioning algorithms
under smartphone receiver rotation or tilt and achieved an effective balance between bandwidth and
computing resources. Moreover, a loosely coupled fusion method integrating rolling-shutter
camera-enabled VLP with inertial data from an inertial measurement unit (IMU) was proposed to significantly
enhance the localization robustness of VLP systems under adverse operational conditions, such as
insufficient LED transmission power, dynamic LED state switching, and ambient light interference
[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Besides addressing the positioning errors induced by terminal angle rotation and noise
interference in the aforementioned works, the recognition of regions of interest and the extraction of LED
beacon identifiers (IDs) present another significant technical challenge. The authors proposed an image
tracking algorithm based on threshold segmentation and geometric feature analysis of LED arrays. By
constructing candidate ROIs and selecting the near-rectangular structure closest to the image center as
the optimal ROI, the algorithm effectively mitigates natural light interference and achieves real-time and
accurate tracking and positioning of LED arrays [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. It should be noted that this approach requires the
LED array to be deployed in a rectangular form. However, in many practical scenarios, the deployment
of LEDs is irregular, which poses a significant challenge to the applicability of the method.
      </p>
      <p>To address the aforementioned challenges, this paper proposes a SuperGlue-based positioning method
aimed at enhancing the anti-interference capability of indoor CMOS camera-enabled VLP systems.
The performance of the proposed system is validated through a VLP testbench. The main contributions
of this paper are summarized as follows.</p>
      <p>1) A novel dual-LED VLP method is proposed in this paper. Specifically, the camera-based receiver
simultaneously captures the IDs of two LED beacons and estimates its own location from the LED
coordinates embedded in those IDs. To accurately detect the ROI of each LED, an improved
SuperGlue graph neural network (GNN) algorithm is introduced, replacing the conventional
threshold-based ROI detection method to enhance LED recognition accuracy. Meanwhile, an adaptive
threshold mechanism is designed to filter out high-intensity pixel clusters in the binary image.
Moreover, ambient light interference is mitigated through bidirectional row and column edge
detection combined with inter-row pixel difference analysis.
2) A semi-physical VLP experimental testbed is constructed. Specifically, the LED beacon employs
Manchester coding and on-off keying (OOK) modulation to transmit ID information, while the
CMOS-based receiver is mounted on a two-dimensional mobile platform. Experimental results
demonstrate that the detection performance of the proposed modified SuperGlue detection method
is significantly better than that of the baseline solutions. Based on this, the positioning performance
of the proposed VLP system reaches the centimeter level: 80% of the positioning errors are within
8.2 cm in a 4 × 4 × 3 m³ test area.</p>
      <p>The remainder of this paper is organized as follows. Section 2 presents the VLP system framework.
The SuperGlue-based positioning method is proposed in Section 3. Section 4 details the implementation
of the FPGA-based VLP testbench and provides a comprehensive analysis of the positioning performance
evaluation. Finally, Section 5 concludes the paper.</p>
    </sec>
    <sec id="sec-system">
      <title>2. System Model and Positioning Principle</title>
        <sec id="sec-1-4-1">
          <title>2.1. System Model</title>
          <p>In a typical indoor environment, VLP systems must ensure uniform lighting while providing
high-precision positioning services. To achieve this, an even number of lamps is often deployed uniformly
indoors. As shown in Fig. 1, two LED lamps, serving as positioning transmitting beacons, are evenly
distributed in the considered VLP system. It is important to note that the three-dimensional (3D)
coordinates of the LED beacon corresponding to each ID are unique. At the receiving end, a common
CMOS image sensor is employed to capture the beacon information. The system requires only two
LED beacons for 3D positioning: by simultaneously capturing their signals, the receiver's location is
estimated from the known beacon positions (obtained via their IDs) and the transceiver distance.</p>
          <p>[Fig. 1: Considered VLP system: an FPGA-controlled LED driver circuit modulates the ceiling beacons LED1 and LED2, and a CMOS-camera positioning terminal at ground coordinates (X, Y) acts as the receiver.]</p>
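          <p>A minimal sketch of the ID-to-coordinate lookup performed at the receiver once the two beacon IDs are decoded is given below; the ID values and coordinates are illustrative assumptions, not values from the testbed.</p>
          <preformat>
# Hypothetical ID-geolocation database: each decoded beacon ID maps to the
# unique 3D world coordinates (x, y, z) of that LED.
BEACON_DB = {
    0x5A: (0.75, 2.0, 2.5),   # LED1 (illustrative)
    0xA5: (3.25, 2.0, 2.5),   # LED2 (illustrative)
}

def lookup_beacons(id1, id2):
    """Return the world coordinates of the two captured beacons."""
    return BEACON_DB[id1], BEACON_DB[id2]
</preformat>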
        </sec>
        <sec id="sec-1-4-2">
          <title>2.2. Positioning principle</title>
          <p>Different from conventional positioning methods that rely on three or more LED beacons for position
estimation, the proposed VLP system achieves accurate positioning using only two LED beacons. The
principle of the proposed VLP system is elucidated through geometric analysis.</p>
          <p>[Fig. 2: Geometry of the dual-LED positioning model: beacons LED1 and LED2 on the ceiling plane, the lens at vertical distance H below them with focal length f, and projections p_A,1, p_12, p_A,2 on the pixel plane.]</p>
          <p>As illustrated in Fig. 2, the world coordinates of the two LED beacons in the proposed VLP system
are defined as (x_1, y_1, z_1) and (x_2, y_2, z_2). Their projected coordinates p_i(u_i, v_i), i = 1, 2, on the
CMOS-based receiver plane are obtained through the perspective projection of the LED optical centers
via the imaging lens. For deployment cost and aesthetic considerations, the LED beacons are typically
installed at the same horizontal height. In other words, the z-axis coordinates of the LED beacons in the
world coordinate system are the same, i.e., z_1 = z_2. Therefore, the Euclidean distance d_LED between the two
LED beacons in the world coordinate system can be determined as follows.</p>
          <disp-formula id="eq1">
            <label>(1)</label>
            <tex-math>d_{\mathrm{LED}} = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}.</tex-math>
          </disp-formula>
          <p>It is assumed that the center point of the camera lens, referred to as 'lens' in Fig. 2, serves as the
receiver-end location. Moreover, the projections of the lens onto the LED beacon plane and the imaging
plane are denoted A(x_A, y_A, z_A) and A'(0, 0), respectively. Note that A' is expressed in pixel coordinates
and is set to (0, 0) for simplicity. Thus, the distances d_A,i from point A to the two LED beacons are
expressed as</p>
          <disp-formula id="eq2">
            <label>(2)</label>
            <tex-math>d_{A,i} = \sqrt{(x_A - x_i)^2 + (y_A - y_i)^2},</tex-math>
          </disp-formula>
          <p>where i = 1, 2 indexes the LED beacons. Moreover, the distance from the projection point p_i(u_i, v_i)
of the i-th LED beacon to the centroid A'(0, 0) of the LED pixel area can be calculated as</p>
          <disp-formula id="eq3">
            <label>(3)</label>
            <tex-math>d'_{A,i} = \sqrt{u_i^2 + v_i^2}.</tex-math>
          </disp-formula>
          <p>Then, the Euclidean distance between the projections of the two LED beacons on the imaging plane is
given as</p>
          <disp-formula id="eq4">
            <label>(4)</label>
            <tex-math>d'_{\mathrm{LED}} = \sqrt{(u_1 - u_2)^2 + (v_1 - v_2)^2}.</tex-math>
          </disp-formula>
          <p>Since the image sensor plane and the LED beacon plane are parallel, the vertical height H between the
image sensor and the LED beacon plane can be derived using the principle of similar triangles as</p>
          <disp-formula id="eq5">
            <label>(5)</label>
            <tex-math>H = f\,\frac{d_{\mathrm{LED}}}{d'_{\mathrm{LED}}},</tex-math>
          </disp-formula>
          <p>where f is the focal length of the lens. Thus, the z-coordinate of the receiver terminal can be
determined as z_A = z_1 − H. Given that the imaging plane and the world coordinate system are parallel
with coincident axis orientations, the following relationship can be derived from the geometric similarity
of triangles:</p>
          <disp-formula id="eq6">
            <label>(6)</label>
            <tex-math>\frac{d_{A,i}}{d'_{A,i}} = \frac{H}{f}, \quad i = 1, 2.</tex-math>
          </disp-formula>
          <p>The values x_A and y_A can then be solved from the geometric relationships (2), (3), and (6), as sketched
in the example below.</p>
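          <p>To make the geometry concrete, the following minimal Python sketch solves Eqs. (1)-(6) for the receiver position. The beacon coordinates, pixel projections, and focal length are illustrative assumptions, and the two-circle intersection generally yields two candidates, which in practice are disambiguated by the image layout.</p>
          <preformat>
import numpy as np

# Illustrative inputs: beacon world coordinates (m) and their projections
# on the sensor plane (m), plus an assumed focal length.
led1 = np.array([0.75, 2.0, 2.5])    # (x1, y1, z1)
led2 = np.array([3.25, 2.0, 2.5])    # (x2, y2, z2), z1 == z2
p1 = np.array([0.0025, 0.0005])      # (u1, v1), projection of LED1
p2 = np.array([-0.0020, 0.0005])     # (u2, v2), projection of LED2
f = 0.0036                           # focal length (m), assumed

d_led = np.linalg.norm(led1[:2] - led2[:2])   # Eq. (1)
d_led_img = np.linalg.norm(p1 - p2)           # Eq. (4)
H = f * d_led / d_led_img                     # Eq. (5)
z_A = led1[2] - H                             # receiver height

# Eqs. (3) and (6): horizontal distances from A to each beacon.
d_A1 = np.linalg.norm(p1) * H / f
d_A2 = np.linalg.norm(p2) * H / f

# Eq. (2): intersect the two circles centered on the beacons.
d = d_led
a = (d_A1**2 - d_A2**2 + d**2) / (2 * d)
h = np.sqrt(max(d_A1**2 - a**2, 0.0))
base = (led2[:2] - led1[:2]) / d              # unit vector along the baseline
mid = led1[:2] + a * base
normal = np.array([-base[1], base[0]])
for cand in (mid + h * normal, mid - h * normal):
    print("candidate receiver position:", (*cand, z_A))
</preformat>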
        </sec>
      </sec>
      <sec id="sec-roi">
        <title>3. Superglue-enabled ROI Detection Method</title>
        <p>Based on the positioning principle described in the previous section, the proposed VLP method hinges
on accurately finding the projection points P_1 and P_2 of the LEDs on the imaging plane. However, since
the signal power attenuation is inversely proportional to the fourth power of the distance, the presence
of ambient light and noise further degrades the beacon detection performance of the imaging system.
Therefore, it is essential to design an ROI detection algorithm for the LED beacons. Considering that the
illumination of the VLP system varies with environmental conditions, and that some image frames captured
by the camera are blurred by human movement or shadow effects, a SuperGlue-based graph
neural network (GNN) is introduced to tackle these issues. It employs an attention mechanism to learn
illumination-invariant features and utilizes its global context-modeling capacity to match LEDs across
time-series or multi-view information, thereby mitigating single-frame image blur [<xref ref-type="bibr" rid="ref12">12</xref>]. In this context,
since SuperGlue is fundamentally a general-purpose feature matching network, it requires specific
modifications to achieve efficient ROI detection of LEDs. Specifically, because the LED beacons exhibit
significantly higher pixel intensity in camera-captured images, the algorithm can leverage these
high-intensity pixels as primary feature-matching indicators. This approach enables targeted feature
matching for LED detection while maintaining real-time performance.</p>
        <p>[Fig. 3: Structure of the modified SuperGlue-based GNN feature matching method: consecutive input images I_t and I_{t+1} pass through keypoint extraction and alternating self-attention and cross-attention layers to yield descriptors f_i^A and f_j^B; the score matrix S_{i,j} is fed to the Sinkhorn algorithm (T iterations) to produce the matching matrix P, followed by the adaptive threshold module.]</p>
        <p>The basic principle of the proposed method is that the key points within an image typically represent
projections of salient 3D points, and ROI detection of the LEDs is achieved by partially matching local
features between two images. In other words, a valid LED beacon is successfully detected if the matching
degree of its ROI between the two images is the highest. Therefore, ROI detection can be formulated as
an optimization problem, which involves predicting the matching matrix P from two sets of local
features using a GNN-based model. In this context, the structure of the modified SuperGlue-based GNN
feature matching method is shown in Fig. 3.</p>
        <p>The model takes two consecutive images I_t and I_{t+1} as input and, through image preprocessing and
CNN-based extraction, outputs a set of keypoint positions p and their associated visual descriptors d,
where p ≜ (x, y, c) consists of the image coordinates (x, y) and the detection confidence c. Images I_t
and I_{t+1} yield M and N sets of local features (p, d), respectively. A keypoint encoder sub-model is then
employed to map each keypoint position p and its visual descriptor d into a single vector x. These
vectors are fed into an attentional GNN consisting of L layers of alternating self-attention and
cross-attention modules, which simultaneously captures the spatial relationships and visual features of the
key points and yields more powerful matching descriptors f. The matching process is formulated as</p>
        <disp-formula id="eq7">
          <label>(7)</label>
          <tex-math>f_i^{A} = F(\mathbf{x}_i^{A}), \quad f_j^{B} = F(\mathbf{x}_j^{B}),</tex-math>
        </disp-formula>
        <p>where F(·) represents the attentional GNN. Based on the descriptors f_i^A and f_j^B, a score matrix is
constructed, whose entries are given by</p>
        <disp-formula id="eq8">
          <label>(8)</label>
          <tex-math>S_{i,j} = \langle f_i^{A}, f_j^{B} \rangle, \quad \forall (i, j) \in A \times B,</tex-math>
        </disp-formula>
        <p>where ⟨·, ·⟩ is the inner product. Note that the magnitude of the matching descriptors varies with the
features and is adjusted during training to reflect the prediction confidence. The optimal matching
matrix P̄ is then obtained via differentiable optimal transport as</p>
        <disp-formula id="eq9a">
          <label>(9a)</label>
          <tex-math>\bar{\mathbf{P}} = \arg\min_{\mathbf{P} \in \mathcal{P}} \sum_{i,j} -S_{i,j} P_{i,j} + E(\mathbf{P}),</tex-math>
        </disp-formula>
        <disp-formula id="eq9b">
          <label>(9b)</label>
          <tex-math>E(\mathbf{P}) = -\sum_{i,j} P_{i,j} \left( \log P_{i,j} - 1 \right),</tex-math>
        </disp-formula>
        <p>where 𝒫 represents the set of feasible matching matrices and E(P) is the entropy regularization term,
which smooths the optimization problem and mitigates convergence to poor local optima. The problem
is solved by the Sinkhorn algorithm.</p>
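        <p>For concreteness, a minimal NumPy sketch of this entropy-regularized matching step is given below. It assumes the descriptors have already been produced by the attentional GNN, uses uniform marginals, and omits SuperGlue's dustbin row and column for brevity.</p>
        <preformat>
import numpy as np

def sinkhorn_match(fA, fB, T=20, eps=0.1):
    """Approximate the optimal matching matrix P by Sinkhorn scaling.

    fA: (M, D) descriptors from image I_t; fB: (N, D) descriptors from
    I_{t+1}. Returns a soft assignment whose rows sum to 1/M and whose
    columns sum to 1/N (uniform marginals).
    """
    S = fA @ fB.T                       # Eq. (8): inner-product scores
    K = np.exp(S / eps)                 # Gibbs kernel of the regularized problem
    a = np.full(K.shape[0], 1.0 / K.shape[0])
    b = np.full(K.shape[1], 1.0 / K.shape[1])
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(T):                  # T Sinkhorn iterations, as in Fig. 3
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]  # doubly normalized matching matrix

# Toy usage with random unit-norm descriptors (illustrative only).
rng = np.random.default_rng(0)
fA = rng.normal(size=(4, 8)); fA /= np.linalg.norm(fA, axis=1, keepdims=True)
fB = rng.normal(size=(5, 8)); fB /= np.linalg.norm(fB, axis=1, keepdims=True)
print(sinkhorn_match(fA, fB).round(3))
</preformat>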
        <p>To address ambient lighting variations and improve the robustness of LED detection, a dynamic
adaptive threshold τ_auto is introduced to replace the fixed intensity threshold; it is given as</p>
        <disp-formula id="eq10">
          <label>(10)</label>
          <tex-math>\tau_{\mathrm{auto}} = \max\left( \mu_I + 3\sigma_I,\ 230 \right),</tex-math>
        </disp-formula>
        <p>where μ_I represents the mean intensity of I and σ_I is the standard deviation of I. Note that
I = I_t ∪ I_{t+1} denotes the union of pixel intensities from the consecutive images I_t and I_{t+1}. Moreover, the
constant 230 serves as an empirical lower bound to ensure detectability in extreme low-light scenarios.
Then, the subset ℳ of feature pairs whose intensity exceeds the threshold τ_auto is given as</p>
        <disp-formula id="eq11">
          <label>(11)</label>
          <tex-math>\mathcal{M} = \left\{ (p_i, p_j) \;\middle|\; \frac{I_t(p_i) + I_{t+1}(p_j)}{2} \ge \tau_{\mathrm{auto}} \right\}.</tex-math>
        </disp-formula>
        <p>Finally, the match results are obtained by</p>
        <disp-formula id="eq12">
          <label>(12)</label>
          <tex-math>\mathcal{M}^{*} = \bar{\mathbf{P}} \odot \mathbf{M},</tex-math>
        </disp-formula>
        <p>where M is the binary mask with M_{i,j} = 1 if (i, j) ∈ ℳ and M_{i,j} = 0 otherwise, and ⊙ represents
the Hadamard product.</p>
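        <p>A short sketch of Eqs. (10)-(12) is given below, assuming 8-bit grayscale frames and integer keypoint pixel coordinates as inputs; the array layout is an illustrative assumption.</p>
        <preformat>
import numpy as np

def adaptive_threshold(img_t, img_t1):
    """Eq. (10): tau_auto = max(mu_I + 3*sigma_I, 230) over both frames."""
    I = np.concatenate([img_t.ravel(), img_t1.ravel()]).astype(np.float64)
    return max(I.mean() + 3.0 * I.std(), 230.0)

def masked_matches(P, kp_t, kp_t1, img_t, img_t1):
    """Eqs. (11)-(12): keep matches whose mean keypoint intensity is high.

    P: (M, N) matching matrix; kp_t, kp_t1: integer (x, y) keypoint
    coordinates of shape (M, 2) and (N, 2).
    """
    tau = adaptive_threshold(img_t, img_t1)
    it = img_t[kp_t[:, 1], kp_t[:, 0]].astype(np.float64)      # I_t(p_i)
    it1 = img_t1[kp_t1[:, 1], kp_t1[:, 0]].astype(np.float64)  # I_{t+1}(p_j)
    # Binary mask M: 1 where (I_t(p_i) + I_{t+1}(p_j)) / 2 >= tau_auto.
    M = ((it[:, None] + it1[None, :]) / 2.0 >= tau).astype(P.dtype)
    return P * M    # Eq. (12): Hadamard product
</preformat>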
      </sec>
      <sec id="sec-exp">
        <title>4. Experimental Testbed and Result Analysis</title>
        <sec id="sec-1-4-3">
          <title>4.1. Semi-physical experimental testbed</title>
          <p>As shown in Fig. 4, a semi-physical experimental testbed is constructed in a typical indoor scenario
with a size of 4 × 4 × 3 m³ to validate the effectiveness of the proposed positioning method.
Specifically, two commercial LED emitters are installed on the ceiling with a 2.5 m spacing, centered in the
room. To ensure uniform lighting and stable signal transmission, the transmitter employs Manchester
coding and OOK modulation to encode and modulate the beacon ID. The signal operates
within the 1–10 kHz frequency range, and the LED beacons are driven by a constant-current source. The
modulated optical signal is captured by an OV5640 CMOS camera (operating at 60 fps with 640×480
resolution) mounted on a ground robotic platform, with the camera constrained to a fixed
vertical orientation facing upward. The acquired data undergoes offline MATLAB processing for feature
extraction and precise position estimation.</p>
          <p>[Fig. 4: Semi-physical experimental testbed: the transmission side comprises an LED-based transmitter (driver circuit with LED1 and LED2), and the receiver side comprises a CMOS-based receiver whose pipeline performs CNN-driven feature extraction, SuperGlue-driven LED ROI localization, and position estimation.]</p>
        </sec>
        <sec id="sec-1-4-4">
          <title>4.2. Results analysis</title>
          <p>In order to validate the effectiveness and superiority of the proposed ROI detection method, the
receiver is placed directly below one LED beacon and the performance of the ROI detection methods
is first compared. Specifically, Fig. 5 illustrates the detection accuracy of the proposed scheme, the
Gaussian-enhanced (GE) detection scheme, and the conventional regional pixel threshold segmentation
(RPTS) detection scheme at varying transceiver heights. It is apparent from Fig. 5 that the ROI detection
accuracy decreases as the transceiver distance increases. This is attributed to the more severe power
attenuation at greater distances, which degrades the image quality. Moreover, it can be seen that the
proposed scheme significantly outperforms the two baseline schemes, achieving a detection accuracy
exceeding 80% at a distance of 2.5 m, which is a common indoor positioning distance. It is worth
noting that the proposed method could support longer distances if a higher-performance camera were
utilized.</p>
          <p>[Fig. 5: Detection accuracy versus detection distance (0–300 cm) for the proposed method, GE detection, and RPTS detection.]</p>
          <p>Secondly, beacon signals are collected at 256 evenly distributed grid points within the indoor area. To
ensure the reliability of the results, each location is measured 20 times (400 ms per trial) and the average
value is calculated, resulting in a total of 5120 measurement instances for analysis. As shown in Fig. 6, the
positioning results at the 256 discrete measurement points under the aforementioned setup are compared
with the real positions, and a two-dimensional positioning error density map smoothed by a Gaussian
kernel is utilized to quantify the spatial error characteristics. It is evident from Fig. 6 that the positioning
results near the center of the area closely match the real positions, while those in the edge regions exhibit
certain deviations, with the worst-case error reaching approximately 0.08 m. This is due to the
limited field of view of the transceiver device, which results in partial distortion of the images captured by
the camera at the edge points. Nevertheless, the overall positioning performance achieves centimeter-level
accuracy.</p>
          <p>[Fig. 6(a): Real versus estimated positions of the 256 measurement points over the 4 m × 4 m test area.]</p>
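          <p>A sketch of how such a Gaussian-smoothed error density map can be computed from the grid measurements is shown below; the grid resolution and kernel width are assumed parameters, and the arrays of true and estimated positions are placeholders for the measured data.</p>
          <preformat>
import numpy as np
from scipy.ndimage import gaussian_filter

def error_density_map(true_xy, est_xy, area=(4.0, 4.0), res=0.05, sigma=2.0):
    """Accumulate per-point positioning errors on a grid, then smooth
    the mean-error image with a Gaussian kernel."""
    err = np.linalg.norm(est_xy - true_xy, axis=1)   # per-point error (m)
    nx, ny = int(area[0] / res), int(area[1] / res)
    acc = np.zeros((ny, nx))
    cnt = np.zeros((ny, nx))
    ix = np.clip((true_xy[:, 0] / res).astype(int), 0, nx - 1)
    iy = np.clip((true_xy[:, 1] / res).astype(int), 0, ny - 1)
    np.add.at(acc, (iy, ix), err)
    np.add.at(cnt, (iy, ix), 1)
    mean_err = np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
    return gaussian_filter(mean_err, sigma=sigma)

# Usage (with measured arrays true_xy, est_xy of shape (256, 2)):
# density = error_density_map(true_xy, est_xy)
</preformat>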
          <p>Finally, the cumulative distribution functions (CDFs) of the positioning errors at two different LED beacon
heights are compared and analyzed to accurately evaluate the positioning performance of the proposed
VLP system. Fig. 7 illustrates the CDF curves of the positioning errors at vertical heights of 2.0 m and 2.5 m.
As depicted in the figure, the positioning error of the VLP system increases with height. Specifically,
the proposed system achieves a positioning error of 5.2 cm at an 80% confidence level for a height of
2.0 m, outperforming the 8.2 cm error observed at a height of 2.5 m. Notably, the proposed positioning
method achieves centimeter-level positioning accuracy using only two LED beacons.</p>
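          <p>For reference, the empirical CDF used in such an evaluation can be computed as below; the toy error samples are synthetic stand-ins for the measured errors, not experimental data.</p>
          <preformat>
import numpy as np

def empirical_cdf(errors_cm):
    """Return sorted errors and their empirical CDF values."""
    x = np.sort(np.asarray(errors_cm, dtype=np.float64))
    y = np.arange(1, x.size + 1) / x.size
    return x, y

# Synthetic stand-in for the 5120 measured errors (illustrative only).
rng = np.random.default_rng(1)
toy_errors = np.abs(rng.normal(5.0, 3.0, size=5120))
x, y = empirical_cdf(toy_errors)
p80 = np.percentile(toy_errors, 80)   # cf. the reported 80th-percentile error
</preformat>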
        </sec>
      </sec>
    <sec id="sec-2">
      <title>5. Conclusion</title>
      <p>This study proposes a high-precision indoor positioning method based on SuperGlue. The method
requires the camera to capture only two LED beacons and performs positioning estimation using an
ID-geolocation database and geometric relationships. To address the challenge of LED detection and
recognition being susceptible to ambient light interference, a GNN-based modified superglue method is
designed to accurately detect the ROI of LED beacons in the image. Moreover, an adaptive threshold
detection module is introduced to enhance the robustness of the method. A semi-physical VLP testbed is
constructed to evaluate the effectiveness of the proposed approach. Experimental evaluations conducted
in a typical indoor scenario area demonstrate that the proposed method achieves significantly more
accurate ROI detection compared to the baseline solution. Furthermore, 80% of the positioning errors
of the proposed VLP system are within 8.2 cm.</p>
    </sec>
    <sec id="sec-3">
      <title>Acknowledgments</title>
      <p>This work was supported in part by the National Natural Science Foundation of China (Grant
62561039 and 62161023); in part by the Young Natural Science Foundation of Jiangxi Province (Grant
20224BAB212004); and in part by the University-Industry Collaborative Education Program (Grant
231002455265614).</p>
    </sec>
    <sec id="sec-4">
      <title>Declaration on Generative AI</title>
      <p>No Generative AI tools were employed in the preparation of this paper.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Tyagi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ali</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Kwak</surname>
          </string-name>
          , “
          <article-title>A systematic review of contemporary indoor positioning systems: Taxonomy, techniques, and algorithms</article-title>
          ,” IEEE Internet Things J., vol.
          <volume>11</volume>
          , no.
          <issue>27</issue>
          , pp.
          <fpage>34717</fpage>
          -
          <lpage>34733</lpage>
          , Nov.
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>P. S.</given-names>
            <surname>Farahsari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Farahzadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rezazadeh</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Bagheri</surname>
          </string-name>
          , “
          <article-title>A survey on indoor positioning systems for IoT-based applications</article-title>
          ,” IEEE Internet Things J., vol.
          <volume>9</volume>
          , no.
          <issue>10</issue>
          , pp.
          <fpage>7680</fpage>
          -
          <lpage>7699</lpage>
          , May.
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bastiaens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Alijani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Joseph</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Plets</surname>
          </string-name>
          , “
          <article-title>Visible light positioning as a next-generation indoor positioning technology: A tutorial</article-title>
          ,” IEEE Commun. Surveys Tut., vol.
          <volume>26</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>2867</fpage>
          -
          <lpage>2913</lpage>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Zhou</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O. A.</given-names>
            <surname>Dobre</surname>
          </string-name>
          , “
          <article-title>Channel and location estimation enabled a novel BDCNP network for massive MIMO VLCP systems</article-title>
          ,”
          <source>IEEE Wireless Commun. Lett.</source>
          , vol.
          <volume>13</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>218</fpage>
          -
          <lpage>222</lpage>
          , Jan.
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , “
          <article-title>PoE-enabled visible light positioning network with low bandwidth requirement and high precision pulse reconstruction</article-title>
          ,
          <source>” IEEE J. Indoor and Seamless Positioning and Navigation</source>
          , vol.
          <volume>2</volume>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>11</lpage>
          , Jan.
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Luo</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R. Q.</given-names>
            <surname>Hu</surname>
          </string-name>
          , “
          <article-title>BER analysis of NOMA-enabled visible light communication systems with different modulations</article-title>
          ,”
          <source>IEEE Trans. Veh. Technol.</source>
          , vol.
          <volume>68</volume>
          , no.
          <issue>11</issue>
          , pp.
          <fpage>10807</fpage>
          -
          <lpage>10821</lpage>
          , Nov.
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zeng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Yang</surname>
          </string-name>
          , and W. Guan, “
          <article-title>High accuracy, 6-DoF simultaneous localization and calibration using visible light positioning</article-title>
          ,”
          <source>J. Lightw. Technol.</source>
          , vol.
          <volume>40</volume>
          , no.
          <issue>21</issue>
          , pp.
          <fpage>7039</fpage>
          -
          <lpage>7047</lpage>
          , Nov.
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>X.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Shi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Cao</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhou</surname>
          </string-name>
          , “
          <article-title>RatioVLP: Ambient light noise evaluation and suppression in the visible light positioning system</article-title>
          ,
          <source>” IEEE Trans. Mobile Comput.</source>
          , vol.
          <volume>23</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>5755</fpage>
          -
          <lpage>5769</lpage>
          , May.
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Yang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>X.</given-names>
            <surname>Wei</surname>
          </string-name>
          , “
          <article-title>Visible light positioning based on collaborative LEDs and edge computing,”</article-title>
          <source>IEEE Trans. Comput. Social Syst.</source>
          , vol.
          <volume>9</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>324</fpage>
          -
          <lpage>335</lpage>
          , Feb.
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>W.</given-names>
            <surname>Guan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Hussain</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C. P.</given-names>
            <surname>Yue</surname>
          </string-name>
          , “
          <article-title>Robust robotic localization using visible light positioning and inertial fusion</article-title>
          ,”
          <source>IEEE Sensors J.</source>
          , vol.
          <volume>22</volume>
          , no.
          <issue>6</issue>
          , pp.
          <fpage>4882</fpage>
          -
          <lpage>4892</lpage>
          , Mar.
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , H. Zhang, and
          <string-name>
            <given-names>Z.</given-names>
            <surname>Xue</surname>
          </string-name>
          , “
          <article-title>Hybrid indoor positioning system based on LED array</article-title>
          ,”
          <source>IEEE Photon. J.</source>
          , vol.
          <volume>17</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>11</lpage>
          , Apr.
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>P.-E.</given-names>
            <surname>Sarlin</surname>
          </string-name>
          , D. DeTone, T. Malisiewicz, and
          <string-name>
            <given-names>A.</given-names>
            <surname>Rabinovich</surname>
          </string-name>
          , “
          <article-title>SuperGlue: Learning feature matching with graph neural networks</article-title>
          ,
          <source>” in 2020 IEEE/CVF Conf. on Comput. Vision and Pattern Recognition (CVPR)</source>
          , Seattle, WA, USA,
          <year>2020</year>
          , pp.
          <fpage>4937</fpage>
          -
          <lpage>4946</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>