A New Subband Set-Membership Fast NLMS (SB-SM-FNLMS) Adaptive Algorithm. CEUR-WS Vol-3922, paper 10: https://ceur-ws.org/Vol-3922/paper10.pdf (DBLP: https://dblp.org/rec/conf/iam/ZeroualiD24)
                         A New Subband Set-Membership Fast NLMS
                         (SB-SM-FNLMS) Adaptive Algorithm
                         Mohamed ZEROUALI1,*,† , Mohamed DJENDI1,*,†
                         1
                             University of Blida 1, Blida, Algeria


                                         Abstract
                                         This study introduces a novel Subband Set-Membership Fast Normalized Least Mean Square (SB-SM-FNLMS)
                                         adaptive filtering algorithm. By integrating the subband adaptive filtering approach into the Set-Membership Fast
                                         Normalized Least Mean Square (SM-FNLMS) algorithm, the convergence rate, final mean square error (MSE),
                                         and computational complexity (CC) are improved. A performance comparison, based on the MSE learning
                                         curve, between the proposed SB-SM-FNLMS algorithm and the existing Normalized Least Mean
                                         Square (NLMS), Set-Membership Normalized Least Mean Square (SM-NLMS), Fast Normalized Least Mean
                                         Square (FNLMS), and Set-Membership Fast Normalized Least Mean Square (SM-FNLMS) algorithms
                                         demonstrates the superior performance of the proposed algorithm.

                                         Keywords
                                         NLMS, FNLMS, SM, SB, SM-FNLMS, SB-SM-FNLMS, SegMSE, CC




                         1. Introduction
                         In modern communication systems, such as hands-free telephony and audio teleconferencing, adaptive
                         filtering plays an important role, particularly in applications like acoustic echo cancellation (AEC) and
                         noise reduction (NR). Adaptive filtering adjusts filter coefficients in real-time, making it highly effective
                         in non-stationary environments. Several reduced-complexity adaptive algorithms have been proposed
                         in the literature, including partial update techniques [1], where only a subset of filter taps is updated
                         during each iteration [1, 2]. Set-membership algorithms have also been introduced as an alternative,
                         utilizing specific time-update instances to reduce overall computational complexity (CC) [3, 4]. A
                         compromise between partial updating and set-membership NLMS algorithms has been proposed to
                         further reduce CC [2, 5, 6]. Recently, a Set-Membership Fast NLMS [4] (SM-FNLMS) algorithm was
                         developed. The FNLMS algorithm [7] uses the decorrelation properties to improve convergence speed by
                         estimating the first forward predictor coefficient. When combined with the set-membership approach,
                         this improves both computational complexity and convergence rate.
                             In this work, we develop a subband (SB) approach for the SM-FNLMS algorithm, where a set-
                         membership adaptive filtering technique is applied in each subband. This approach offers two key
                         advantages: subband filtering enhances the convergence rate, while applying the set-membership test in
                         each subband reduces the update frequency, which in turn lowers the computational complexity
                         compared to the original SM-FNLMS algorithm [4]. The performance of the proposed algorithm is
                         evaluated in terms of the mean square error (MSE) and the overall computational complexity required
                         over the simulation time. The structure of this paper is as follows: Section 2 discusses the adaptive
                         filtering problem. Section 3 introduces the NLMS, FNLMS [7], and SM-FNLMS [4] algorithms. In
                         Section 4, we present the derivation of the proposed subband SM-FNLMS (SB-SM-FNLMS) algorithm.
                         Simulation results, in terms of MSE and computational complexity, are provided in Section 5. Finally,
                         Section 6 concludes the paper.



                         7th International Conference on Informatics and Applied Mathematics IAM’24, December 4-5, 2024, Guelma, Algeria
                         *
                           Corresponding author.
                         †
                           These authors contributed equally.
                         $ zerouali.med@yahoo.com (M. ZEROUALI); m_djendi@yahoo.fr (M. DJENDI)
                                        © 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).


2. Adaptive Filtering
The principle of adaptive filtering is illustrated in Figure 1. It involves processing an input signal x(n)
to generate, at each time instant, an output signal y(n) such that the difference between the desired
response d(n) and the estimated response y(n) is minimized. This minimization is achieved by updating
the coefficients (weights) of the adaptive filter w at time n, using the latest data set, which includes the
desired signal d(n), the input signal x(n), and the a priori filtering error defined as follows:

                                         𝑒(𝑛) = 𝑑(𝑛) − w𝑇 (𝑛 − 1)x(𝑛)                                     (1)

  Here, the input signal vector x(n) represents the M last samples of the input signal at instant n, and
the filter vector w(n) represents the M adjusted coefficients at time instant n. These two vectors are
defined as follows:
                             x(𝑛) = [𝑥(𝑛) 𝑥(𝑛 − 1) . . . 𝑥(𝑛 − 𝑀 + 1)]𝑇                                  (2)

                                w(𝑛) = [𝑤0 (𝑛) 𝑤1 (𝑛) . . . 𝑤𝑀 −1 (𝑛)]𝑇                                  (3)
   In most common cases, the desired signal d(n), is correlated with the input signal x(n), as it is obtained
using a linear transformation of the input signal (e.g., in cases of acoustic echo cancellation, adaptive
filter identification, and adaptive noise cancellation). The adaptive algorithm iteratively minimizes
the mean square error E[e²(n)] at each time step, using the previous estimate w(n − 1) and the
correction term G(n)e(n):

                                         w(𝑛 + 1) = w(𝑛) + G(𝑛)𝑒(𝑛)                                       (4)
  Where G(n) represents the adaptation gain.




Figure 1: Adaptive filtering principle




3. Adaptive filtering algorithms
3.1. NLMS Algorithm
The adaptation gain G(n) can be computed in different ways by various algorithms. In the normalized
least mean squares (NLMS) algorithm, it is calculated by minimizing the mean squared error (MSE) and
is defined as:

                                         G(𝑛) = 𝜇NLMS x(𝑛)/(x𝑇 (𝑛)x(𝑛))                                     (5)
   Here, 𝜇NLMS serves as the step-size parameter, controlling the convergence behavior of the NLMS
algorithm, and is bounded by 0 < 𝜇NLMS < 2. As a result, the adaptive filter's weights are updated
through the recursive equation:

                          w(𝑛 + 1) = w(𝑛) + 𝜇NLMS x(𝑛)𝑒(𝑛)/(x𝑇 (𝑛)x(𝑛))                               (6)
   The NLMS algorithm has a computational complexity of 3M+1 multiplications and 1 division per
iteration, making it feasible for implementation in real-time applications with limited computational
resources. Algorithm 1 provides a summary of the NLMS algorithm.

Algorithm 1 NLMS Algorithm
  𝑒(𝑛) = 𝑑(𝑛) − w𝑇 (𝑛 − 1)x(𝑛)
  w(𝑛) = w(𝑛 − 1) + [𝜇NLMS /(x𝑇 (𝑛)x(𝑛) + 𝛿NLMS )] x(𝑛)𝑒(𝑛)

  Where 𝛿NLMS is a small constant to avoid division by zero.
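As a concrete illustration, Algorithm 1 reduces to a single-iteration function. The following minimal NumPy sketch is ours (the function name and the regularization value are illustrative choices, not from the paper):

```python
import numpy as np

def nlms_step(w, x_buf, d, mu=1.0, delta=1e-6):
    """One NLMS iteration (Algorithm 1).

    w     : current weights, shape (M,)
    x_buf : the M most recent inputs [x(n), x(n-1), ..., x(n-M+1)]
    d     : desired sample d(n)
    """
    e = d - w @ x_buf                                   # a priori error, Eq. (1)
    w = w + (mu / (x_buf @ x_buf + delta)) * x_buf * e  # normalized update, Eq. (6)
    return w, e
```

The per-call cost is dominated by the three length-M products, consistent with the 3M + 1 multiplications quoted above.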

3.2. FNLMS Algorithm
The Fast Recursive Least Squares (FRLS) algorithm is an efficient computational alternative to the
Recursive Least Squares (RLS) algorithm. It minimizes the sum of squared errors with an exponential
forgetting factor 𝜆 by computing the dual Kalman gain k(n) of the RLS algorithm, introducing forward
and backward predictor vectors a(n) and b(n). In the FRLS algorithm, the adaptation gain is defined
as follows:

                                           𝐺(𝑛) = 𝛾(𝑛)𝑘(𝑛)                                            (7)

   where 𝛾(𝑛) is called the likelihood factor. This algorithm provides a high convergence speed rate
compared to the NLMS algorithm. However, the CC of this algorithm remains high in comparison with
that of NLMS.
   A more recent approach, the fast-convergence NLMS (FNLMS) algorithm [7], further reduces compu-
tational complexity by simplifying the adaptation gain of the FRLS algorithm, achieving a computational
cost similar to that of the NLMS algorithm while maintaining a convergence rate close to that of the
FRLS algorithm. In the FNLMS algorithm, the adaptation gain is computed using the following recursive
equation:
                    𝑘(𝑛) = [ −𝑒¯(𝑛)/(𝜆𝑝(𝑛 − 1) + 𝑐0 )   𝑘0 (𝑛 − 1)  . . .  𝑘𝑀 −2 (𝑛 − 1) ]𝑇           (8)

   that is, the new leading element is prepended to the shifted previous gain. Here 𝑐0 is a small constant
included to prevent division by zero. The forward prediction error 𝑒¯(𝑛) and its variance p(n) are
calculated using the following equations:

                                     ¯𝑒(𝑛) = 𝑥(𝑛) + 𝑎(𝑛)𝑥(𝑛 − 1)                                      (9)

                                      𝑝(𝑛) = 𝜆𝑝(𝑛 − 1) + ¯𝑒2 (𝑛)                                     (10)
  The predictor 𝑎(𝑛) is estimated by using the auto-correlation coefficients 𝑟0 (𝑛) and 𝑟1 (𝑛):

                                          𝑎(𝑛) = 𝑟1 (𝑛)/𝑟0 (𝑛)                                       (11)
  these autocorrelation coefficients are recursively updated as follows:

                                   𝑟0 (𝑛) = 𝜆𝑎 𝑟0 (𝑛 − 1) + 𝑥(𝑛)𝑥(𝑛)                                 (12)

                                𝑟1 (𝑛) = 𝜆𝑎 𝑟1 (𝑛 − 1) + 𝑥(𝑛)𝑥(𝑛 − 1)                                (13)
  Where 𝜆𝑎 is the forgetting factor. The conversion factor 𝛾(𝑛) is updated using the following recursive
equation:

                                                   𝛾(𝑛 − 1)
                                     𝛾(𝑛) =                                                          (14)
                                              1 + 𝛾(𝑛 − 1) + 𝛽(𝑛)
  Where 𝛽(𝑛) is computed as follows:

                                                       𝑥(𝑛)𝑒¯(𝑛)
                            𝛽(𝑛) = 𝑘(𝑛)𝑥(𝑛 − 𝑀 ) +                                                   (15)
                                                  1 + 𝜆𝑝(𝑛 − 1) + 𝑐0
  The FNLMS algorithm is summarized in Algorithm 2:

Algorithm 2 FNLMS algorithm
  Initialization:
  𝑘(0) = 0, 𝛾(0) = 1, 𝑟1 (0) = 0, 𝑟0 (0) = 1, 𝑝(0) = 1
  0.9 < 𝜆 < 1, 0.9 < 𝜆𝑎 < 1, 𝑐𝑎 = 𝑐0 (small constant)

  𝑟0 (𝑛) = 𝜆𝑎 𝑟0 (𝑛 − 1) + 𝑥(𝑛)𝑥(𝑛)
  𝑟1 (𝑛) = 𝜆𝑎 𝑟1 (𝑛 − 1) + 𝑥(𝑛)𝑥(𝑛 − 1)
  𝑎(𝑛) = 𝑟1 (𝑛)/(𝑟0 (𝑛) + 𝑐𝑎 )
  𝑒¯(𝑛) = 𝑥(𝑛) + 𝑎(𝑛)𝑥(𝑛 − 1)
  𝑝(𝑛) = 𝜆𝑝(𝑛 − 1) + 𝑒¯2 (𝑛)
  𝑘(𝑛) = [ −𝑒¯(𝑛)/(𝜆𝑝(𝑛 − 1) + 𝑐0 )   𝑘0 (𝑛 − 1)  . . .  𝑘𝑀 −2 (𝑛 − 1) ]𝑇
  𝛽(𝑛) = 𝑘(𝑛)𝑥(𝑛 − 𝑀 ) + 𝑥(𝑛)𝑒¯(𝑛)/(1 + 𝜆𝑝(𝑛 − 1) + 𝑐0 )
  𝛾(𝑛) = 𝛾(𝑛 − 1)/(1 + 𝛾(𝑛 − 1) + 𝛽(𝑛))
  Filtering:
  𝑒(𝑛) = 𝑑(𝑛) − w𝑇 (𝑛 − 1)x(𝑛)
  w(𝑛) = w(𝑛 − 1) − 𝜇FNLMS 𝛾(𝑛)𝑘(𝑛)𝑒(𝑛)
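The recursions of Algorithm 2 can be sketched compactly in NumPy. This is our reading of the equations above (in particular, the element shifted out of the gain vector is the one we use in the first term of β(n)); it is an illustrative sketch, not the authors' reference implementation:

```python
import numpy as np

def fnlms(x, d, M, mu=1.0, lam=0.94, lam_a=0.94, c0=0.01):
    """Illustrative FNLMS recursion following Algorithm 2."""
    w = np.zeros(M)
    k = np.zeros(M)            # approximate dual Kalman gain
    gamma, p = 1.0, 1.0        # likelihood factor, prediction-error power
    r0, r1 = 1.0, 0.0          # autocorrelation estimates
    e_out = np.zeros(len(x))
    for n in range(M, len(x)):
        xb = x[n:n-M:-1]                          # [x(n), ..., x(n-M+1)]
        r0 = lam_a * r0 + x[n] * x[n]             # Eq. (12)
        r1 = lam_a * r1 + x[n] * x[n-1]           # Eq. (13)
        a = r1 / (r0 + c0)                        # Eq. (11), regularized
        ebar = x[n] + a * x[n-1]                  # Eq. (9)
        k_last = k[-1]                            # element shifted out of the gain
        k = np.concatenate(([-ebar / (lam * p + c0)], k[:-1]))       # Eq. (8)
        beta = k_last * x[n-M] + x[n] * ebar / (1.0 + lam * p + c0)  # Eq. (15)
        gamma = gamma / (1.0 + gamma + beta)      # Eq. (14)
        p = lam * p + ebar**2                     # Eq. (10)
        e_out[n] = d[n] - w @ xb                  # a priori error
        w = w - mu * gamma * k * e_out[n]         # weight update
    return w, e_out
```

Note the prediction and gain recursions use only scalar operations plus the length-M filtering products, which is what keeps the cost near 2M per iteration.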



3.3. Set-Membership FNLMS (SM-FNLMS) algorithm
The main strategy of the SM-FNLMS algorithm is to conduct a verification step that checks whether the
prior estimate vector w(𝑛 − 1) falls outside the constraint set Ψ. This set includes all vectors w for
which the corresponding output error 𝑒(𝑛) at time 𝑛 remains within a specified upper limit, denoted by
𝜁:

                               Ψ = {w ∈ R𝑀 : |𝑑(𝑛) − w𝑇 x(𝑛)| < 𝜁}                                   (16)

   A recursive algorithm with a priori error 𝑒(𝑛) testing can be used to converge the filter w(𝑛) to the
set of filter solutions defined by equation (16). The recursive updating equation is given below:

                               w(𝑛) = w(𝑛 − 1) − 𝜇(𝑛)𝛾(𝑛)𝑘(𝑛)𝑒(𝑛)                                    (17)
  Where:
                          𝜇(𝑛) = 1 − 𝜁/|𝑒(𝑛)| if |𝑒(𝑛)| > 𝜁, and 𝜇(𝑛) = 0 otherwise                   (18)
   Clearly, when |𝑒(𝑛)| ≤ 𝜁, the step size is 𝜇(𝑛) = 0 and consequently w(𝑛) = w(𝑛 − 1), so the
filter is not updated, which reduces the overall computational cost. Additionally, the SM-FNLMS
algorithm employs a variable step size, which achieves good convergence with a low steady-state MSE.
The SM-FNLMS algorithm is summarized in Algorithm 3.
Algorithm 3 SM-FNLMS Algorithm
  Initialization:
  𝑘(0) = 0, 𝛾(0) = 1, 𝑟1 (0) = 0, 𝑟0 (0) = 1, 𝑝(0) = 1
  0.9 < 𝜆 < 1, 0.9 < 𝜆𝑎 < 1, 𝑐𝑎 = 𝑐0 (small constant)

  𝑟0 (𝑛) = 𝜆𝑎 𝑟0 (𝑛 − 1) + 𝑥(𝑛)𝑥(𝑛)
  𝑟1 (𝑛) = 𝜆𝑎 𝑟1 (𝑛 − 1) + 𝑥(𝑛)𝑥(𝑛 − 1)
  𝑎(𝑛) = 𝑟1 (𝑛)/(𝑟0 (𝑛) + 𝑐𝑎 )
  𝑒¯(𝑛) = 𝑥(𝑛) + 𝑎(𝑛)𝑥(𝑛 − 1)
  𝑝(𝑛) = 𝜆𝑝(𝑛 − 1) + 𝑒¯2 (𝑛)
  𝑘(𝑛) = [ −𝑒¯(𝑛)/(𝜆𝑝(𝑛 − 1) + 𝑐0 )   𝑘0 (𝑛 − 1)  . . .  𝑘𝑀 −2 (𝑛 − 1) ]𝑇
  𝛽(𝑛) = 𝑘(𝑛)𝑥(𝑛 − 𝑀 ) + 𝑥(𝑛)𝑒¯(𝑛)/(1 + 𝜆𝑝(𝑛 − 1) + 𝑐0 )
  𝛾(𝑛) = 𝛾(𝑛 − 1)/(1 + 𝛾(𝑛 − 1) + 𝛽(𝑛))
  Filtering:
  𝑒(𝑛) = 𝑑(𝑛) − w𝑇 (𝑛 − 1)x(𝑛)
  if |𝑒(𝑛)| > 𝜁 then
      𝜇(𝑛) = 1 − 𝜁/|𝑒(𝑛)|
      w(𝑛) = w(𝑛 − 1) − 𝜇(𝑛)𝛾(𝑛)𝑘(𝑛)𝑒(𝑛)
  else
      w(𝑛) = w(𝑛 − 1)
  end if
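The data-selective test at the heart of the algorithm is tiny in code. This helper (the name is ours) returns the variable step size of Eq. (18):

```python
def sm_step_size(e, zeta):
    """Set-membership step size: zero (no update) when the a priori error
    is already inside the bound zeta, otherwise scaled so the correction
    pulls the error magnitude toward the boundary."""
    return 1.0 - zeta / abs(e) if abs(e) > zeta else 0.0
```

For 𝜁 = 0.223, an error of 0.5 yields 𝜇 = 0.554, while any error of magnitude at or below 0.223 yields 𝜇 = 0 and the filter is left untouched; this gating is what produces the sparse update pattern reported later in Table 2.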


4. Proposed Algorithm
Consider the critically decimated subband adaptive filtering structure (𝐷 = 𝑁 ) shown in Figure 2 [8].
Before filtering, the input signals 𝑥(𝑛) and 𝑑(𝑛) are analyzed by the analysis filters 𝐹𝑗 to generate the
subband signals 𝑥𝑗 (𝑛) and 𝑑𝑗 (𝑛). The desired signals 𝑑𝑗 (𝑛) and the filter output signals 𝑦𝑗 (𝑛) are then
decimated by a factor 𝐷 to provide the low-time-rate signals 𝑑𝑗 (𝑘) and 𝑦𝑗 (𝑘), producing the
low-time-rate subband errors 𝑒𝑗 (𝑘).
   Note that the subband signals are indexed by 𝑗 and the decimated signals by the low-rate time index
𝑘. In our algorithm, in each subband, we calculate the prediction parameters from the corresponding
input subband signal 𝑥𝑗 (𝑛), following the same strategy as the fullband FNLMS.
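For readers unfamiliar with the analysis stage, a minimal two-band example (D = N = 2) can be sketched as follows. The quadrature-mirror construction of the highpass branch is a standard illustrative choice, not the paper's 32-tap filter bank:

```python
import numpy as np

def analyze_2band(x, h0):
    """Split x into two critically decimated subband signals.

    h0 is a lowpass prototype; the highpass branch is its mirror
    h1[n] = (-1)^n h0[n]. Filtering then keeping every D-th sample
    yields the low-rate signals x_j(k) used by the algorithm."""
    h1 = h0 * (-1.0) ** np.arange(len(h0))
    x1 = np.convolve(x, h0)[::2]   # lowpass subband, decimated by 2
    x2 = np.convolve(x, h1)[::2]   # highpass subband, decimated by 2
    return x1, x2
```

Each subband then runs its own prediction and set-membership recursion at half the original sampling rate.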
   The proposed algorithm is based on subband set-membership, where the adaptive filters 𝑤(𝑘 + 1)
belong to the set of filters Ψ:

                                             𝑤(𝑘 + 1) ∈ Ψ                                            (19)
   The proposed algorithm uses subband adaptive updating, so we define the set of filters Ψ as the
intersection of the sets of subband filters 𝜓𝑗 , as follows:

                                       Ψ = 𝜓1 ∩ 𝜓2 ∩ · · · ∩ 𝜓𝑁                                      (20)
  Each subband filter set 𝜓𝑗 is defined in low sampling time 𝑘 as follows:

                              𝜓𝑗 = {𝑤 ∈ R𝑀 : |𝑑𝐷,𝑗 (𝑘) − 𝑥𝑇𝑗 (𝑘)𝑤| < 𝜁}                              (21)
  As with the full-band SM-FNLMS algorithm, we can converge the adaptive filter inside the set Ψ.
However, here, 𝑁 conditions are imposed on the subband errors. The recursive updating equation is
given by:
                                                   𝑁
                                                  ∑︁
                            𝑤(𝑘) = 𝑤(𝑘 − 1) −           𝜇𝑗 (𝑘)𝛾𝑗 (𝑘)𝑘𝑗 (𝑘)𝑒𝑗 (𝑘)                     (22)
                                                  𝑗=1
  where:
                        𝜇𝑗 (𝑘) = 1 − 𝜁/|𝑒𝑗 (𝑘)| if |𝑒𝑗 (𝑘)| > 𝜁, and 𝜇𝑗 (𝑘) = 0 otherwise               (23)




Figure 2: Subband adaptive filtering



Algorithm 4 Proposed SB-SM-FNLMS Algorithm
   Initialization:
   𝑘𝑗 (0) = 0, 𝛾𝑗 (0) = 1, 𝑟𝑗,1 (0) = 0, 𝑟𝑗,0 (0) = 1, 𝑝𝑗 (0) = 1
   0.9 < 𝜆 < 1, 0.9 < 𝜆𝑎 < 1, 𝑐𝑎 = 𝑐0 (small constant)
   For each subband 𝑗 = 1, . . . , 𝑁 :
   𝑟𝑗,0 (𝑛) = 𝜆𝑎 𝑟𝑗,0 (𝑛 − 1) + 𝑥𝑗 (𝑛)𝑥𝑗 (𝑛)
   𝑟𝑗,1 (𝑛) = 𝜆𝑎 𝑟𝑗,1 (𝑛 − 1) + 𝑥𝑗 (𝑛)𝑥𝑗 (𝑛 − 1)
   𝑎𝑗 (𝑛) = 𝑟𝑗,1 (𝑛)/(𝑟𝑗,0 (𝑛) + 𝑐𝑎 )
   𝑒¯𝑗 (𝑛) = 𝑥𝑗 (𝑛) + 𝑎𝑗 (𝑛)𝑥𝑗 (𝑛 − 1)
   𝑝𝑗 (𝑛) = 𝜆𝑝𝑗 (𝑛 − 1) + 𝑒¯2𝑗 (𝑛)
   𝑘𝑗 (𝑛) = [ −𝑒¯𝑗 (𝑛)/(𝜆𝑝𝑗 (𝑛 − 1) + 𝑐0 )   𝑘𝑗,0 (𝑛 − 1)  . . .  𝑘𝑗,𝑀 −2 (𝑛 − 1) ]𝑇
   𝛽𝑗 (𝑛) = 𝑘𝑗 (𝑛)𝑥𝑗 (𝑛 − 𝑀 ) + 𝑥𝑗 (𝑛)𝑒¯𝑗 (𝑛)/(𝜆𝑝𝑗 (𝑛 − 1) + 𝑐0 )
   𝛾𝑗 (𝑛) = 𝛾𝑗 (𝑛 − 1)/(1 + 𝛾𝑗 (𝑛 − 1) + 𝛽𝑗 (𝑛))
   Filtering:
   𝑒𝑗 (𝑘) = 𝑑𝑗 (𝑘) − w𝑇 (𝑘 − 1)𝑥𝑗 (𝑘)
   if |𝑒𝑗 (𝑘)| > 𝜁 then
       𝜇𝑗 (𝑘) = 1 − 𝜁/|𝑒𝑗 (𝑘)|
       w(𝑘) = w(𝑘 − 1) − 𝜇𝑗 (𝑘)𝛾𝑗 (𝑘)𝑘𝑗 (𝑘)𝑒𝑗 (𝑘)
   else
       w(𝑘) = w(𝑘 − 1)
   end if
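One low-rate update instant k of Eqs. (22)-(23) can be sketched as below. The per-subband gains and likelihood factors are taken as given (computing them follows the recursions above), the names are ours, and the corrections are applied sequentially here, whereas Eq. (22) sums them all against the same w(k − 1); for small corrections the two coincide approximately:

```python
import numpy as np

def sb_sm_update(w, x_sub, d_sub, gamma, k_gain, zeta):
    """Accumulate the per-subband set-membership corrections of Eq. (22).

    x_sub  : (N, M) regressors, row j = subband-j input vector at time k
    d_sub  : (N,)   desired subband samples d_j(k)
    gamma  : (N,)   likelihood factors gamma_j(k)
    k_gain : (N, M) per-subband adaptation gains k_j(k)
    """
    for j in range(len(d_sub)):
        e_j = d_sub[j] - w @ x_sub[j]
        if abs(e_j) > zeta:                            # subband membership test
            mu_j = 1.0 - zeta / abs(e_j)               # Eq. (23)
            w = w - mu_j * gamma[j] * k_gain[j] * e_j  # one term of Eq. (22)
    return w
```

Subbands whose error is already inside the bound contribute nothing, which is why the total update count drops well below that of the fullband SM-FNLMS.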


5. Simulation
In this section, we evaluate the performance of the proposed SB-SM-FNLMS algorithm in terms of
convergence rate (MSE learning curve) and computational complexity (CC). The input signal is a colored
autoregressive signal generated by an autoregressive model:

               𝑥(𝑛) = −0.8650 𝑥(𝑛 − 1) − 0.8066 𝑥(𝑛 − 2) − 0.7703 𝑥(𝑛 − 3) + 𝑣(𝑛)                 (24)
   Here, v(n) represents Gaussian white noise with variance 𝜎𝑣2 , adjusted to ensure 𝜎𝑥2 = 1. The desired
signal is derived from filtering the input signal using an impulse response of 512 samples, as illustrated
in Figure 3. To simulate various perturbations, white noise with a variance of 𝜎𝜂2 = 0.01 is added
to the desired signal. The parameters of each algorithm are adjusted for optimal convergence rates,
with step sizes set to 1. For the set-membership algorithms (i.e., SM-NLMS, SM-FNLMS, and the proposed
SB-SM-FNLMS), the error bound is defined as 𝜁 = √(5𝜎𝜂2 ) = 0.223. This simulation aims to evaluate the
behavior of the proposed algorithms in addressing the impulse response identification problem. The
learning curve simulation involves calculating the segmental mean square error (SegMSE), as described
by the relation below:
                                                   𝐵−1
                                    SegMSEdB =     ∑︁    10 log10 𝑒2 (𝑛 − 𝑖)                           (25)
                                                                 [︀        ]︀

                                                   𝑖=0

  Where B is the block length; in this simulation, B is set to 400 samples. To generate the subband
input signals 𝑥𝑗 (𝑘) and subband desired signals 𝑑𝑗 (𝑘), we use analysis and synthesis FIR filters with
32 taps. The frequency responses of these filters for two subband decompositions (N=2 and N=4) are
shown in Figure 4.
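The simulation setup is easy to reproduce. Below is a sketch (ours) of the colored-input generator of Eq. (24) and a segmental MSE routine; here the SegMSE is computed as block-averaged error power in dB, a common and numerically safer reading of the criterion, and the driving-noise variance is not rescaled to force 𝜎𝑥2 = 1 as in the paper:

```python
import numpy as np

def ar3_input(n_samples, seed=0):
    """Colored AR(3) input of Eq. (24) driven by white Gaussian noise v(n)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n_samples)
    x = np.zeros(n_samples)
    for n in range(3, n_samples):
        x[n] = -0.8650*x[n-1] - 0.8066*x[n-2] - 0.7703*x[n-3] + v[n]
    return x

def seg_mse_db(e, B=400):
    """One learning-curve point per non-overlapping block of B error samples."""
    n_blocks = len(e) // B
    blocks = e[:n_blocks*B].reshape(n_blocks, B)
    return 10.0 * np.log10(np.mean(blocks**2, axis=1))
```

Plotting seg_mse_db of each algorithm's error signal against the block index reproduces the style of learning curve shown in Figure 5.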




Figure 3: Impulse Response




Figure 4: Frequency responses of the analysis and synthesis FIR filters of 32 taps



5.1. Learning curve
In this experiment, we evaluate the performance of the proposed SB-SM-FNLMS algorithm in comparison
with the NLMS, SM-NLMS, FNLMS, and SM-FNLMS algorithms using the SegMSE criterion. We
consider 2, 3, and 4 subband decompositions with critical decimation for our proposed algorithm.
The obtained results are shown in Figure 5. Based on this figure, we observe a higher convergence
rate with the proposed algorithm compared to all other algorithms, especially for higher numbers of
subband decompositions (N), which is due to the decorrelation introduced by subband decomposition.
Additionally, a lower steady-state MSE is achieved by the proposed algorithm, which is due to the
minimization of subband power errors, leading to a lower fullband power error.




                                                                                                  √︁
Figure 5: The learning curve is defined with 𝜆𝑎 = 𝜆 = 0.94, 𝑐0 = 𝑐𝑎 = 0.01, 𝜎𝜂2 = 0.01, and 𝜁 =     5𝜎𝜂2 = 0.223.
The step size of all algorithms is fixed to 1.



5.2. Computational complexity
Table 1 presents the computational complexity (CC) in terms of the number of multiplications and
divisions required for one iteration of the five algorithms (i.e., NLMS, SM-NLMS, FNLMS, SM-FNLMS,
and the proposed SB-SM-FNLMS algorithm). With critical decimation (D=N), the proposed algorithm
requires 2M+12N+2 multiplications and 1+4N divisions per iteration, and for 𝑀 ≫ 𝑁 , the CC of the proposed
algorithm is close to that of the SM-FNLMS algorithm. However, since the adaptive filter is updated
based on the low time-rate k, the overall time CC of the proposed algorithm can be significantly lower
than that of the SM-FNLMS. Figure 6 presents the ON/OFF updating filter at each time instant for our
simulation. We set N=4, and the updating filter based on the proposed algorithm operates on four
subbands with a low time rate.
   As shown in Figure 6, the update frequency of the proposed algorithm is much lower than that
of the other algorithms. The total number of updates obtained for the learning curve in Figure 5 is
provided in Table 2. Based on this table, we observe that the number of updates is approximately
one-fifth that of the SM-FNLMS and SM-NLMS algorithms. This result demonstrates the superior
performance of the proposed algorithm in terms of computational complexity (CC).

            Algorithm                         Multiplications                   Divisions
            NLMS                                  3M + 1                            1
            SM-NLMS                               3M + 1                            2
            FNLMS                                2M + 14                            4
            SM-FNLMS                             2M + 14                            5
            Proposed SB-SM-FNLMS           (2+M)N/D+MN/D+12N                    N/D+4N
Table 1
Computational complexity per iteration
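The entries of Table 1 can be checked numerically. This helper (ours) evaluates the counts for given M, N, D and confirms the 2M + 12N + 2 multiplications and 1 + 4N divisions at critical decimation:

```python
def cc_counts(M, N, D=None):
    """Multiplications and divisions per iteration, per Table 1.
    D defaults to N (critical decimation)."""
    D = N if D is None else D
    return {
        "NLMS":        (3*M + 1, 1),
        "SM-NLMS":     (3*M + 1, 2),
        "FNLMS":       (2*M + 14, 4),
        "SM-FNLMS":    (2*M + 14, 5),
        "SB-SM-FNLMS": ((2 + M)*N//D + M*N//D + 12*N, N//D + 4*N),
    }
```

For the simulated case M = 512, N = D = 4, this gives 1074 multiplications and 17 divisions for the proposed algorithm against 1038 and 5 for SM-FNLMS: the per-update overhead is modest (12N extra multiplications), while the number of updates actually performed drops by the factor reported in Table 2.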
            Algorithm                                          Total number of updates
            NLMS                                                       100000
            SM-NLMS                                                     56361
            FNLMS                                                      100000
            SM-FNLMS                                                    48936
            Proposed SB-SM-FNLMS                                        10517
Table 2
Total number of updates




Figure 6: Obtained frequency updating for each set-membership algorithm


6. Conclusion
We have introduced in this work a new subband set-membership-based Fast Normalized Least Mean
Square (SB-SM-FNLMS) algorithm. By incorporating subband filtering, the proposed algorithm improves
both the convergence rate and the computational complexity (CC). Simulation results demonstrate its
superior performance compared to the existing NLMS, SM-NLMS, FNLMS, and SM-FNLMS algorithms in
terms of convergence speed, final mean square error (MSE), and computational complexity (CC), making
it well-suited for practical adaptive filtering applications, including acoustic echo cancellation and
adaptive noise reduction.
Declaration on Generative AI
During the preparation of this work, the author used ChatGPT and Grammarly for grammar and spelling
checking and for paraphrasing and rewording. After using these tools, the author reviewed and edited
the content as needed and takes full responsibility for the publication's content.


References
[1] S. C. Douglas, Adaptive filters employing partial updates, IEEE Transactions on Circuits and
    Systems II: Analog and Digital Signal Processing 44 (1997) 209–216.
[2] S. Werner, M. L. De Campos, P. S. Diniz, Partial-update NLMS algorithms with data-selective
    updating, IEEE Transactions on Signal Processing 52 (2004) 938–949.
[3] S. Gollamudi, S. Nagaraj, S. Kapoor, Y.-F. Huang, Set-membership filtering and a set-membership
    normalized LMS algorithm with an adaptive step size, IEEE Signal Processing Letters 5 (1998)
    111–114.
[4] I. Hassani, M. Arezki, A. Benallal, Set-membership fast-NLMS algorithm for acoustic echo
    cancellation, in: 2018 International Conference on Signal, Image, Vision and their Applications
    (SIVA), IEEE, 2018, pp. 1–5.
[5] M. S. E. Abadi, F. Moradiani, A unified approach to tracking performance analysis of the selective
    partial update adaptive filter algorithms in nonstationary environment, Digital Signal Processing
    23 (2013) 817–830.
[6] A. Cheffi, M. Djendi, A. Guessoum, New efficient two-channel forward set-membership partial-
    update NLMS algorithms for blind speech enhancement and acoustic noise reduction, Applied
    Acoustics 141 (2018) 322–332.
[7] A. Benallal, M. Arezki, A fast convergence normalized least-mean-square type algorithm for
    adaptive filtering, International Journal of Adaptive Control and Signal Processing 28 (2014)
    1073–1080.
[8] K.-A. Lee, W.-S. Gan, Improving convergence of the NLMS algorithm using constrained subband
    updates, IEEE Signal Processing Letters 11 (2004) 736–739.