PSO-SVR Aided Robust EKF Algorithm for GNSS/INS Integrated
Navigation
Xu Liu 1, Jian Wang 1 and Houzeng Han 1

1 Beijing University of Civil Engineering and Architecture, No.1 Exhibition Hall Road, Xicheng District, Beijing, 100044, China

Abstract
The traditional robust extended Kalman filter (EKF) usually fails to resist small and continuous outliers effectively when abnormalities occur in GNSS data. To solve this issue, we propose a PSO-SVR aided robust EKF for GNSS/INS integrated navigation. First, the particle swarm optimization (PSO) algorithm is used to optimize the SVR kernel parameter and penalty coefficient, establishing a PSO-SVR intelligent prediction model. Next, on the basis of the robust EKF, the PSO-SVR model is employed to predict the navigation solution during abnormal periods of GNSS. Finally, the algorithm is verified by a vehicle field test, and its robustness to different outliers is further tested by artificially adding small and continuous outliers of different sizes. The experimental results show that, compared with the REKF algorithm, the PSO-SVR-REKF algorithm resists outliers effectively whether the outlier at a single epoch is significant or not, and for a 10 s abnormality of GNSS data it improves the accuracy by 64%.
                  Keywords 1
                  GNSS/INS integrated navigation, extended Kalman filter, support vector regression, particle
                  swarm optimization

1. Introduction
    Recently, GNSS prediction combined with machine learning has become a popular approach to the poor accuracy of GNSS/INS integrated navigation systems during abnormal GNSS periods [1]. Common machine learning algorithms used to assist GNSS/INS integrated navigation include the Support Vector Machine (SVM) [2, 3], Recurrent Neural Network (RNN) [4, 5], Residual Attention Network (RAN) [6], Long Short-Term Memory (LSTM) [7, 8], Gated Recurrent Unit (GRU) [9], etc. These algorithms can be divided into intelligent prediction based on neural networks and regression prediction based on statistics, both of which rely on shallow network layers and are therefore easy to apply to time-series prediction. However, prediction algorithms based on neural networks easily suffer from local optima, an uncertain network topology, the β€œcurse of dimensionality”, and other issues [10, 11]. SVR, based on statistical learning theory, is a method for statistical regression and prediction that copes well with small samples, nonlinearity, high dimensionality, and local minima. Hence, it is very suitable for predicting navigation information during GNSS abnormalities, and it has been verified to be more efficient than neural-network prediction algorithms.
    However, to construct the SVR model, parameter optimization must be carried out, since choosing training parameters by experience affects the regression accuracy and generalization ability. Although the genetic algorithm-support vector regression (GA-SVR) model is widely applied in navigation systems, its efficiency is low because the operation requires complex processes such as selection, crossover, and mutation [12]. Particle Swarm Optimization (PSO) is a population-based parallel optimization technique with few adjustable parameters, fast convergence, and simple operation, which makes it suitable for processing real-time navigation data [13].



1
 IPIN 2022 WiP Proceedings, September 5 - 7, 2022, Beijing, China
EMAIL: liuxucrystal@163.com (Xu Liu); wangjian@bucea.edu.cn (Jian Wang); hanhouzeng@bucea.edu.cn (Houzeng Han)
            ©️ 2020 Copyright for this paper by its authors.
            Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
            CEUR Workshop Proceedings (CEUR-WS.org)
   Based on this, this paper first constructs an intelligent prediction model based on PSO-SVR with optimized kernel parameter and penalty coefficient, taking the angular rate and acceleration output by the inertial navigation system (INS) and the velocity and attitude estimated by the mechanization equations as the input, and the increment of the GNSS/INS integrated navigation solution as the output. It then investigates how the PSO-SVR aided robust EKF diagnoses and repairs single-point and continuous abnormalities of GNSS signals, predicting the navigation solution during the abnormal GNSS period and improving the ability of the EKF to suppress outliers.

2. GNSS/INS robust EKF model

    2.1. Extended Kalman filter

   The standard Kalman filter generally assumes that the system and observation equations are linear. However, the GNSS/INS tightly coupled navigation system usually cannot satisfy this assumption. The EKF algorithm linearizes the nonlinear system to further improve the solution accuracy. The nonlinear system is expressed as

$$x_k = f_{k-1}(x_{k-1}) + w_k, \quad w_k \sim N(0, Q_k) \qquad (1)$$
$$z_k = h_k(x_k) + v_k, \quad v_k \sim N(0, R_k) \qquad (2)$$

where $x_k$ and $x_{k-1}$ are the state vectors at times $k$ and $k-1$ respectively; $w_k$ and $v_k$ are random noise; $f_{k-1}(\cdot)$ is the state transition function; $h_k(\cdot)$ is the transfer function between the state and observation vectors; and the process noise covariance matrix $Q_k$ and the observation noise covariance matrix $R_k$ can be preset. The first-order discrete-time extended Kalman filter prediction is

$$\hat{x}_k(-) = f_{k-1}(\hat{x}_{k-1}(+)) \qquad (3)$$
$$\hat{z}_k = h_k(\hat{x}_k(-)) \qquad (4)$$

The filter estimate and its corresponding covariance matrix are

$$\hat{x}_k(+) = \hat{x}_k(-) + \bar{K}_k (z_k - \hat{z}_k) = \hat{x}_k(-) + \bar{K}_k V_k \qquad (5)$$
$$P_k(+) = [I - \bar{K}_k H_k] P_k(-) \qquad (6)$$

where the prediction covariance matrix is

$$P_k(-) = \Phi_{k-1} P_{k-1}(+) \Phi_{k-1}^{T} + Q_{k-1} \qquad (7)$$

and the gain matrix of the EKF is

$$\bar{K}_k = P_k(-) H_k^{T} [H_k P_k(-) H_k^{T} + R_k]^{-1} \qquad (8)$$

The prediction residual is $V_k = z_k - \hat{z}_k$. The GNSS/INS navigation system can be regarded as nearly, but not strictly, linear, and the EKF handles this weak nonlinearity effectively to deliver better state estimation. In addition, since only the first-order term of the Taylor series expansion is retained, the prediction residual is sufficient to describe the dynamic characteristics even though it does not represent the true value.
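   For illustration, the following is a minimal sketch of one EKF cycle corresponding to Equations (3)-(8). The functions f and h and their Jacobians F and H are hypothetical placeholders, since the tight-coupling state and observation models are not listed here.

```python
import numpy as np

def ekf_step(x_prev, P_prev, z, f, F, h, H, Q, R):
    """One first-order EKF predict/update cycle, following Eqs. (3)-(8)."""
    # Prediction of the state and its covariance, Eqs. (3) and (7)
    x_pred = f(x_prev)
    Phi = F(x_prev)                      # Jacobian of f (state transition matrix)
    P_pred = Phi @ P_prev @ Phi.T + Q
    # Predicted observation and prediction residual, Eq. (4) and V_k = z_k - z_hat_k
    Hk = H(x_pred)                       # Jacobian of h (design matrix)
    V = z - h(x_pred)
    # Gain, state update and covariance update, Eqs. (8), (5), (6)
    K = P_pred @ Hk.T @ np.linalg.inv(Hk @ P_pred @ Hk.T + R)
    x_upd = x_pred + K @ V
    P_upd = (np.eye(x_prev.size) - K @ Hk) @ P_pred
    return x_upd, P_upd, x_pred, P_pred, V, K
```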

    2.2. Robust extended Kalman filter
   The equivalent EKF gain matrix is constructed with an expression similar to the IGG III weight function:

$$\tilde{K}_{ij} = \begin{cases} \bar{K}_{ij}, & s_j \le k_0 \\[4pt] \bar{K}_{ij} \cdot \dfrac{k_0}{s_j} \cdot \left[\dfrac{k_1 - s_j}{k_1 - k_0}\right]^2, & k_0 \le s_j \le k_1 \\[4pt] 0, & s_j > k_1 \end{cases} \qquad (9)$$

where $k_0$ and $k_1$ are threshold parameters, with $k_0$ typically taken as 2.5-3.5 and $k_1$ as 3.5-4.5; $i$ and $j$ denote the dimensions of the state and observation vectors respectively; and $s_j$ is the standardized statistic of the $j$-th observation, formed from its prediction residual, redundant observation component, and measurement standard deviation. The re-weighting is iterated after each measurement update.
   Given the iteration number $t$, the state prediction value and the prediction residual are

$$\hat{x}_{k,t}(-) = \hat{x}_{k,t-1}(+) \qquad (10)$$
$$V_{k,t} = z_k - H_k \hat{x}_{k,t}(-) \qquad (11)$$

i.e., the state prediction value $\hat{x}_{k,t}(-)$ at the $t$-th iteration is the state filter value after the $(t-1)$-th iteration, and the prediction residual follows from it. After calculating the equivalent gain matrix according to Equation (9), the robust filter value is obtained as

$$\hat{x}_{k,t}(+) = \hat{x}_{k,t}(-) + \tilde{K}_{k,t} V_{k,t} \qquad (12)$$

When the difference between $\hat{x}_{k,t}(+)$ and $\hat{x}_{k,t-1}(+)$ becomes negligible, $\hat{x}_{k,t}(+)$ is taken as the estimate at time $k$, and the posterior covariance matrix is

$$P_k(+) = [I - \tilde{K}_{k,t} H_k] P_k(-) \qquad (13)$$

where $\tilde{K}_{k,t}$ is the final equivalent Kalman gain matrix at the end of the iteration.
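   A minimal sketch of the IGG III-style re-weighting of Equations (9)-(13) is given below, assuming the standard EKF quantities of Section 2.1 are available; the computation of the standardized residuals $s_j$ from the redundant observation components is simplified here to a division by user-supplied standard deviations.

```python
import numpy as np

def igg3_factor(s, k0=2.5, k1=3.5):
    """Equivalent-gain scaling factor of Eq. (9) for a standardized residual s."""
    s = abs(s)
    if s <= k0:
        return 1.0
    if s <= k1:
        return (k0 / s) * ((k1 - s) / (k1 - k0)) ** 2
    return 0.0

def robust_update(x_pred, P_pred, z, Hk, K, sigma, n_iter=3):
    """Iterative robust state update following Eqs. (10)-(13)."""
    x = x_pred.copy()
    for _ in range(n_iter):
        V = z - Hk @ x                                          # Eq. (11)
        s = V / sigma                                           # simplified standardized residuals
        K_tilde = K * np.array([igg3_factor(sj) for sj in s])   # scale each gain column, Eq. (9)
        x = x + K_tilde @ V                                     # Eq. (12)
    P_rob = (np.eye(P_pred.shape[0]) - K_tilde @ Hk) @ P_pred   # Eq. (13)
    return x, P_rob
```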

3. PSO-SVR training regression model

    3.1. Support vector regression (SVR)

   SVR is a machine learning method commonly employed for statistical regression and data prediction that copes with small samples, nonlinearity, high dimensionality, and local minima. The basic idea of SVR is to map the dataset $x$ into a high-dimensional feature space through a nonlinear transformation and to carry out regression analysis and prediction there. Given the training samples $(x_i, y_i)$, $x_i \in R$, $y_i \in R$, $i = 1, \dots, n$, the objective regression function is set as

$$f(x) = \langle w \cdot x \rangle + b \qquad (14)$$

To obtain the optimal parameters $w$ and $b$, the following optimization problem is established:

$$\min \; \frac{1}{2}\langle w \cdot w \rangle + C \sum_{i=1}^{n}(\xi_i + \xi_i^*) \qquad (15)$$

subject to the constraints

$$y_i - \langle w \cdot x_i \rangle - b \le \varepsilon + \xi_i, \qquad \langle w \cdot x_i \rangle + b - y_i \le \varepsilon + \xi_i^*.$$

Here $C$ is the penalty factor that balances model complexity against the sample errors: the larger $C$ is, the closer the data are fitted. $\varepsilon$ is the parameter of the $\varepsilon$-insensitive loss function and reflects the generalization ability of the model: the larger $\varepsilon$ is, the looser the data fit. Through the kernel method and the corresponding duality theory, the above problem is transformed into the dual optimization problem

$$\max \; W(\alpha, \alpha^*) = \sum_{i=1}^{n} y_i(\alpha_i - \alpha_i^*) - \varepsilon \sum_{i=1}^{n}(\alpha_i + \alpha_i^*) - \frac{1}{2}\sum_{i=1,j=1}^{n}(\alpha_i - \alpha_i^*)(\alpha_j - \alpha_j^*)K(x_i, x_j) \qquad (16)$$

subject to

$$\sum_{i=1}^{n}(\alpha_i - \alpha_i^*) = 0, \quad 0 \le \alpha_i \le C, \quad 0 \le \alpha_i^* \le C, \quad i = 1, 2, \dots, n, \; j = 1, 2, \dots, n$$

Then

$$w = \sum_{i=1}^{n}(\alpha_i - \alpha_i^*)\, x_i, \qquad b = y_j - \sum_{i=1}^{n}(\alpha_i - \alpha_i^*)(x_i \cdot x_j) + \varepsilon$$

and the objective regression function is obtained as

$$f(x) = \sum_{i=1}^{n}(\alpha_i - \alpha_i^*)\, K(x, x_i) + b \qquad (17)$$

    3.2. Particle swarm optimization
    The PSO algorithm first initializes a group of particles in the feasible solution space, each of which represents a potential optimal solution of the extremum optimization problem. The particle characteristics are indicated by three indexes: position, velocity, and fitness value. The fitness value is calculated by the fitness function and reflects the quality of a particle. A particle moving in the solution space updates its position by tracking the individual extremum Pbest and the group extremum Gbest, where the individual extremum Pbest is the position with the best fitness value among those the individual has visited, and the group extremum Gbest is the position with the best fitness found by all particles in the population. The fitness value is recalculated upon each update of the particle position, and the individual extremum Pbest and group extremum Gbest are updated by comparing the fitness value of the new particle position with those of the individual and group extremums.
    Assume a D-dimensional search space with a population $X = (X_1, X_2, \dots, X_n)$ of $n$ particles, among which the $i$-th particle is represented by a D-dimensional vector $X_i = [x_{i1}, x_{i2}, \dots, x_{iD}]^T$, giving the position of the $i$-th particle in the D-dimensional space as well as a candidate solution. The fitness value corresponding to the position $X_i$ of each particle is calculated by the objective function. The velocity of the $i$-th particle is $V_i = [V_{i1}, V_{i2}, \dots, V_{iD}]^T$, its individual extremum is $P_i = [P_{i1}, P_{i2}, \dots, P_{iD}]^T$, and the group extremum of the population is $P_g = [P_{g1}, P_{g2}, \dots, P_{gD}]^T$.
    In each iteration, the particle updates its velocity and position through the individual extremum and the group extremum according to

$$V_{id}^{k+1} = \omega V_{id}^{k} + c_1 r_1 (P_{id}^{k} - X_{id}^{k}) + c_2 r_2 (P_{gd}^{k} - X_{id}^{k}) \qquad (18)$$
$$X_{id}^{k+1} = X_{id}^{k} + V_{id}^{k+1} \qquad (19)$$

where $\omega$ is the inertia weight, $d = 1, 2, \dots, D$, $i = 1, 2, \dots, n$, $k$ is the current iteration number, $V_{id}$ is the velocity of the particle, $c_1$ and $c_2$ are non-negative constants called acceleration factors, and $r_1$ and $r_2$ are random numbers distributed in $[0, 1]$.
   The particle swarm optimization (PSO) algorithm is shown in Figure 1.




Figure 1: Flowchart for PSO algorithm
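    A minimal PSO sketch implementing Equations (18) and (19) to search the SVR penalty coefficient C and RBF kernel parameter gamma is given below. The swarm size, iteration count, search bounds, and acceleration factors are illustrative assumptions rather than the settings used in this paper, and cross-validated RMSE serves as the fitness value.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def fitness(params, X, y):
    """Fitness value: cross-validated RMSE of an SVR with the candidate (C, gamma)."""
    C, gamma = params
    svr = SVR(kernel="rbf", C=C, gamma=gamma)
    mse = -cross_val_score(svr, X, y, cv=3, scoring="neg_mean_squared_error").mean()
    return np.sqrt(mse)

def pso_svr(X, y, n_particles=10, n_iter=20, w=0.7, c1=1.5, c2=1.5):
    lo, hi = np.array([0.1, 1e-3]), np.array([100.0, 1.0])   # assumed bounds on (C, gamma)
    rng = np.random.default_rng(1)
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[np.argmin(pbest_fit)]
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)   # Eq. (18)
        pos = np.clip(pos + vel, lo, hi)                                    # Eq. (19)
        fit = np.array([fitness(p, X, y) for p in pos])
        better = fit < pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[np.argmin(pbest_fit)]
    return gbest                                # optimized (C, gamma)
```

With the training samples of Section 4, a call such as `C, gamma = pso_svr(X_train, y_train)` would return the optimized parameter pair, which is then passed to the SVR of Section 3.1.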
    Popular parameter optimization algorithms include genetic algorithms and swarm intelligence optimization algorithms. To verify that the PSO-SVR model is more efficient than the GA-SVR model, this paper compares their training performance and prediction accuracy (see Table 1); with comparable accuracy, the calculation of PSO-SVR is about 7 times faster than that of GA-SVR (57 s versus 368 s).
Table 1
Comparison of root mean square error and calculation time between the GA-SVR and PSO-SVR models
    RMS          GA-SVR Train    GA-SVR Predict    PSO-SVR Train    PSO-SVR Predict
    North           0.191            0.386             0.137             0.343
    East            0.250            0.444             0.346             0.415
    Height          0.004            0.026             0.035             0.027
    Mean            0.148            0.285             0.172             0.262
    Time (s)                368                                  57


4. PSO-SVR aided robust EKF algorithm
    The process of the PSO-SVR aided robust EKF algorithm is shown in Figure 2. When there is no abnormality in the GNSS data, the PSO-SVR training mode is active. The input of a training sample is the position, velocity, and attitude calculated by the INS mechanization equations, together with the acceleration and angular rate output by the INS; the output of a training sample is the position increment of the navigation solution. These input-output pairs are used to train the PSO-SVR model. When an abnormality occurs in GNSS, training stops and the PSO-SVR prediction mode starts, in which the input is still the position, velocity, and attitude from the INS mechanization equations together with the acceleration and angular rate output by the INS. The input data are fed into the trained PSO-SVR model to predict the increment of the navigation solution; finally, an error correction is applied to the position, velocity, and attitude calculated by the INS to obtain the estimated values.




Figure 2: PSO-SVR-aided robust EKF algorithm
Note: The blue line represents the training part and the green line suggests the prediction part
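The mode switch of Figure 2 can be summarized by the following sketch. Variable names such as gnss_ok, ins_features, and filt_increment are hypothetical placeholders for the quantities described above, and the retraining policy (refitting at every healthy epoch) is a simplification of the scheme.

```python
import numpy as np

def aided_epoch(gnss_ok, ins_features, filt_increment, svr_model,
                train_X, train_y, ins_position):
    """One epoch of the PSO-SVR aided scheme: train while GNSS is healthy,
    predict the position increment during a GNSS abnormality."""
    if gnss_ok:
        # Training mode: store the INS-derived features and the increment of the
        # integrated navigation solution, then (re)fit the PSO-SVR predictor.
        train_X.append(ins_features)
        train_y.append(filt_increment)
        svr_model.fit(np.asarray(train_X), np.asarray(train_y))
        return ins_position + filt_increment
    # Prediction mode: the missing GNSS/INS increment is replaced by the SVR
    # output and used to correct the pure-INS position.
    pred_increment = svr_model.predict(np.asarray(ins_features).reshape(1, -1))[0]
    return ins_position + pred_increment
```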

5. Experiment

    5.1. Data acquisition
   To verify how the PSO-SVR-aided robust EKF model resists outliers when an abnormality occurs in GNSS, we collected data in a vehicle-driving test on November 21, 2021 in an open factory area near the South Fifth Ring Road of Beijing (see Figure 3).
Figure 3: Track of vehicle
   We adopted the high-precision real-time positioning and navigation module independently developed by our group, which integrates Sensonor's high-precision inertial measurement unit STIM300 and NovAtel's OEM719 GNSS receiver; their performance parameters are shown in Table 2. The module is installed in the trunk of the vehicle, and the GNSS antenna is mounted on the top of the vehicle and connected, together with the odometer, to the module through cables, so that the experimental data can be transmitted to the laptop. RTK corrections are provided by the domestic CORS. The real-time positioning and navigation module, test vehicle, and equipment installation are shown in Figure 4.
Table 2
Performance parameters of GNSS and MEMS IMU
    Equipment    Item                                      Performance
    GNSS         Signal tracking                           BDS: B1/B2; GPS: L1/L2; GLONASS: L1/L2; GALILEO: E1/E5b
                 Single point positioning (RMS)            Horizontal: Β±1.5 m; Vertical: Β±3.0 m
                 DGPS (RMS)                                Horizontal: Β±0.4 m; Vertical: Β±0.8 m
                 RTK (RMS)                                 Horizontal: Β±8 cm; Vertical: Β±15 cm
                 Update frequency (Hz)                     5
    MEMS IMU     Accelerometer bias stability (mg)         2
                 Velocity random walk (Β΅g/√Hz)             120
                 Gyroscope bias stability (deg/h)          5
                 Angle random walk (deg/√h)                0.4
Figure 4: Diagrams for real-time positioning and navigation module, testing vehicles, and equipment
installation

    5.2. Experimental analysis
   To verify the effectiveness of the PSO-SVR aided robust EKF model, we design two groups of experiments: (1) single-point abnormality of GNSS: outliers of 25 m, 20 m, 15 m, and 10 m are added to the east position at 192 s to compare the robustness of the REKF and PSO-SVR-REKF algorithms against outliers of different sizes; (2) continuous abnormality of GNSS: a 25 m outlier is added to the east position from 192 s to 202 s (lasting 10 s) to compare the robustness of REKF and PSO-SVR-REKF.
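   A minimal sketch of how such outliers can be injected, assuming a 1 Hz east-position series named gnss_east whose epoch index equals the time in seconds:

```python
import numpy as np

gnss_east = np.zeros(400)            # hypothetical east-position observations (m)
single = gnss_east.copy()
single[192] += 20.0                  # single-point 20 m outlier at t = 192 s
continuous = gnss_east.copy()
continuous[192:202] += 25.0          # 10 s of 25 m outliers starting at t = 192 s
```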
   1. Single point abnormality of GNSS
   Figure 5(a) shows the performance comparison of REKF and EKF after the 20-meter outlier is added to GNSS. The 20-meter outlier is a significant outlier and the robust EKF suppresses it effectively (see Figure 5(a)). For the smaller 15-meter outlier, the robust EKF carries out the suspicious section of Equation (9) to down-weight the observation and weaken the effect of the outlier on the results (see Figure 5(b)). If the outlier is within the normal range, the robust EKF carries out the standard section and delivers the same results as the standard EKF (see Figure 5(c)). Hence, the robust EKF improves the accuracy most significantly by eliminating clearly abnormal data without the assistance of PSO-SVR. However, as the outliers gradually become smaller, the diagnosis ability of the robust EKF decreases accordingly, which largely affects the accuracy of the final results. As can be seen in Table 3, the robust EKF cannot resist the effect of the outlier when the observation outlier equals 10 meters.




        (a) 20-meter outlier                (b) 15-meter outlier             (c) 10-meter outlier
Figure 5: Comparison of REKF and EKF (with different outliers)
    To verify the effectiveness of the PSO-SVR-aided robust EKF algorithm, we use the data with 20-meter, 15-meter, and 10-meter outliers for PSO-SVR training and prediction. The input-output data of the first 150 s are taken as PSO-SVR training samples to train the model, and the position increment after 150 s is predicted, as shown in Figure 6. As can be seen, an abnormality occurs in the position increment of REKF at 192 s, and the increment predicted by PSO-SVR effectively resists the abnormality. We replace the original REKF solution increment with the predicted increment and restore it to position data, whose trajectory is shown in Figure 7. Obviously, the PSO-SVR-aided robust EKF model further resists the influence of outliers on top of the robust solution results. Referring to Table 3, PSO-SVR-REKF has the highest accuracy, followed by REKF and then the standard EKF. Compared with EKF and REKF, the PSO-SVR-REKF model significantly improves the accuracy.




         (a) 20-meter outlier                    (b) 15-meter outlier                     (c) 10-meter outlier
 Figure 6: Prediction increment and deviation




         (a) 20-meter outlier                   (b) 15-meter outlier                    (c) 10-meter outlier
 Figure 7: Trajectory prediction
Table 3
Comparison of the largest position deviations (m) of EKF, REKF, and PSO-SVR-REKF under different outliers
    Outlier (m)      EKF        REKF       PSO-SVR-REKF
    25               7.210      0.002      0.0018
    20               5.620      1.550      0.0017
    15               4.440      2.850      0.0019
    10               2.980      2.980      0.0018

    2. Continuous abnormality of GNSS
    To investigate the effect of the robust EKF when the abnormality in GNSS continues, we add the 25-meter outlier during 192-201 s (lasting for 10 s). As a result, the robust EKF performs the elimination section and resists the outliers to the greatest extent (see Figure 8). However, when the abnormality continues longer, the observation covariance matrix of the Kalman filter is affected, the abnormality diagnosis ability of REKF declines with time, the robustness decreases gradually, and the results even show a divergent trend (see Figure 9).
Figure 8: The abnormality lasts for 10s in GNSS




Figure 9: The abnormality lasts for 11s in GNSS
   To examine the robustness of PSO-SVR-REKF under continuous abnormality, the PSO-SVR model is trained and used for prediction following the same scheme as in experiment (1), and the PSO-SVR-REKF solution is obtained and compared with REKF. As shown in Figure 11, the maximum deviation of the REKF solution is 17.35 meters, while the maximum deviation of the PSO-SVR-REKF solution is 6.18 meters. Hence, when the abnormality of GNSS lasts more than 10 s, the accuracy of PSO-SVR-REKF is improved by about 64% compared with REKF, i.e., (17.35 βˆ’ 6.18)/17.35 β‰ˆ 64%.




Figure 10: Prediction increment and deviation when the abnormality lasts for 11s




    Figure 11: Comparison between PSO-SVR-REKF and REKF (when the abnormality lasts for 11s)

6. Conclusion
    After analyzing the performance of the robust EKF in dealing with small and continuous outliers, this paper builds a PSO-SVR-aided robust EKF algorithm together with its processing flow. The main conclusions are as follows:
    1) When obvious abnormalities occur in GNSS, REKF can suppress the outlier to the largest extent. However, the anomaly discrimination ability of the robust EKF weakens as the outlier becomes smaller. When the GNSS outlier is equal to or below 10 m, REKF loses its robustness; when the abnormality lasts over 10 s (with a 20 m outlier), the robustness of REKF gradually decreases. In other words, the less significant the outlier and the longer it lasts, the weaker the robustness of REKF becomes.
    2) Whether the abnormality is significant or not, the PSO-SVR-REKF model can resist the outlier to the largest extent. It can also resist the influence of outliers when the GNSS abnormality lasts more than 10 s; that is, the PSO-SVR-REKF algorithm is less affected by the magnitude and duration of abnormal data than REKF, and it can resist outliers below 10 meters. When the GNSS abnormality continues for 10 s, the accuracy is improved by 64%.

7. References
[1] Zhang, Guohao, and Li-Ta Hsu. Intelligent GNSS/INS integrated navigation system for a
     commercial UAV flight control system. Aerospace Science and Technology 80 (2018): 368-380.
[2] Chen, Chang-Xing, et al. "A hierarchical fault detection method based on LS-SVM in integrated
     navigation system." Sensors & Transducers 175.7 (2014): 111.
[3] Yao, Y.Q.; Xu X.S. A RLS-SVM aided fusion methodology for INS during GPS outages. Sensors
     17.3 (2017): 432.
[4] Dai, H.-F.; Bian, H.-W.; Wang, R.-Y.; Ma, H. An INS/GNSS Integrated Navigation in GNSS
     Denied Environment Using Recurrent Neural Network. Def. Technol. 2020, 16, 334–340.
[5] Doostdar, P.; Keighobadi, J.; Hamed, M.A. INS/GNSS Integration Using Recurrent Fuzzy Wavelet
     Neural Networks. GPS Solut. 2019, 24, 29.
[6] Xiao, Y.; Luo, H.; Zhao, F.; Wu, F.; Gao, X.; Wang, Q.; Cui, L. Residual Attention Network-Based
     Confidence Estimation Algorithm for Non-Holonomic Constraint in GNSS/INS Integrated
     Navigation System. IEEE Trans. Veh. Technol. 2021, 70, 11404–11418.
[7] Fang, W.; Jiang, J.; Lu, S.; Gong, Y.; Tao, Y.; Tang, Y.; Yan, P.; Luo, H.; Liu, J. A LSTM Algorithm
     Estimating Pseudo Measurements for Aiding INS during GNSS Signal Outages. Remote Sens.
     2020, 12, 256.
[8] Fu, S.Z.; Jiang J.G.; Liu J.H.; et al. A GNSS/INS Vehicle Integrated Navigation System Based on
     LSTM-EKF. Geomatics and Information Science of Wuhan University (2021): 0-0.
[9] Tang, Yanan, et al. A GRU and AKF-Based Hybrid Algorithm for Improving INS/GNSS Navigation
     Accuracy during GNSS Outage. Remote Sensing 14.3 (2022): 752.
[10] Tan X, Wang J, Zhao C. Neural Network Aided Adaptive UKF Algorithm for GPS/INS Integration
     Navigation. Acta Geodaetica et Cartographica Sinica, 2015, 44(4): 384-391.
[11] Tan X, Wang J, Jin S, et al. GA-SVR and pseudo-position-aided GPS/INS integration during GPS
     outage. The Journal of Navigation 68.4 (2015): 678-696.
[12] Liu, Xuming, et al. "Research on adaptive SVR indoor location based on GA optimization."
     Wireless Personal Communications 109.2 (2019): 1095-1120.
[13] Hasanipanah, Mahdi, et al. Prediction of air-overpressure caused by mine blasting using a new
     hybrid PSO–SVR model. Engineering with Computers 33.1 (2017): 23-31.
[14] Belhajem, I.; Yann B.M.; Ahmed T.; A hybrid low cost approach using extended Kalman filter
     and neural networks for real time positioning. 2016 International Conference on Information
     Technology for Organizations Development (IT4OD). IEEE.
[15] Kim, Y.; An, J.; Lee, J. Robust Navigational System for a Transporter Using GPS/INS Fusion.
     IEEE Trans. Ind. Electron. 2018, 65, 3346–3354.