<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A New Subband Set-Membership Fast NLMS (SB-SM-FNLMS) Adaptive Algorithm</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mohamed ZEROUALI</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mohamed DJENDI</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Blida 1</institution>
          ,
          <addr-line>Blida</addr-line>
          ,
          <country country="DZ">Algeria</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This study introduces a novel Subband Set-Membership Fast Normalized Least Mean Square (SB-SM-FNLMS) adaptive filtering algorithm. By integrating the subband adaptive filtering approach into the Set-Membership Fast Normalized Least Mean Square (SM-FNLMS) algorithm, the convergence rate, the final mean square error (MSE), and the computational complexity (CC) are all improved. A performance comparison based on learning curves (MSE plots) between the proposed SB-SM-FNLMS algorithm and the existing Normalized Least Mean Square (NLMS), Set-Membership Normalized Least Mean Square (SM-NLMS), Fast Normalized Least Mean Square (FNLMS), and Set-Membership Fast Normalized Least Mean Square (SM-FNLMS) algorithms demonstrates the superior performance of the proposed algorithm.</p>
      </abstract>
      <kwd-group>
        <kwd>NLMS</kwd>
        <kwd>FNLMS</kwd>
        <kwd>SM</kwd>
        <kwd>SB</kwd>
        <kwd>SM-FNLMS</kwd>
        <kwd>SB-SM-FNLMS</kwd>
        <kwd>SegMSE</kwd>
        <kwd>CC</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Adaptive Filtering</title>
      <p>The principle of adaptive filtering is illustrated in Figure 1. It involves processing an input signal x(n) to generate, at each time instant, an output signal y(n) such that the difference between the desired response d(n) and the estimated response y(n) is minimized. This minimization is achieved by updating the coefficients (weights) of the adaptive filter w at time n, using the latest data set, which includes the desired signal d(n), the input signal x(n), and the a priori filtering error defined as follows:</p>
      <p>e(n) = d(n) − wᵀ(n−1)x(n) (1)</p>
      <p>Here the input signal vector x(n) contains the last M samples of the input signal at instant n, and the filter vector w(n) contains the M adjusted coefficients at time instant n. These two vectors are defined as follows:</p>
      <p>x(n) = [x(n) x(n−1) . . . x(n−M+1)]ᵀ (2)</p>
      <p>w(n) = [w₀(n) w₁(n) . . . w_{M−1}(n)]ᵀ (3)</p>
      <p>In most common cases, the desired signal d(n) is correlated with the input signal x(n), as it is obtained through a linear transformation of the input signal (e.g., in acoustic echo cancellation, adaptive filter identification, and adaptive noise cancellation). The adaptive algorithm iteratively minimizes the mean square error E[e²(n)] at each time step, using the previous estimate w(n) and the correction term G(n)e(n):</p>
      <p>w(n+1) = w(n) + G(n)e(n) (4)</p>
      <p>where G(n) represents the adaptation gain.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Adaptive filtering algorithms</title>
      <sec id="sec-3-1">
        <title>3.1. NLMS Algorithm</title>
        <p>The adaptation gain G(n) can be computed in different ways by various algorithms. In the normalized least mean squares (NLMS) algorithm, it is calculated by minimizing the mean squared error (MSE) and is defined as:</p>
        <p>G(n) = μ_NLMS x(n) / (xᵀ(n)x(n)) (5)</p>
        <p>Here, μ_NLMS serves as the step-size parameter, controlling the convergence behavior of the NLMS algorithm, and is bounded by 0 &lt; μ_NLMS &lt; 2. As a result, the adaptive filter’s weights are updated through the recursive equation:</p>
        <p>w(n+1) = w(n) + μ_NLMS x(n)e(n) / (xᵀ(n)x(n)) (6)</p>
        <p>The NLMS algorithm has a computational complexity of 3M+1 multiplications and 1 division per
iteration, making it feasible for implementation in real-time applications with limited computational
resources. Algorithm 1 provides a summary of the NLMS algorithm.</p>
        <sec id="sec-3-1-1">
          <title>Algorithm 1 NLMS Algorithm</title>
          <p>e(n) = d(n) − wᵀ(n−1)x(n)</p>
          <p>w(n) = w(n−1) + μ_NLMS x(n)e(n) / (xᵀ(n)x(n) + δ_NLMS)</p>
          <p>where δ_NLMS is a small constant to avoid division by zero.</p>
        </sec>
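        <p>As a concrete illustration of Algorithm 1, a minimal NumPy sketch might look as follows (the function name, signature, and default values are ours, chosen for illustration, not taken from the paper):</p>
        <preformat>
```python
import numpy as np

def nlms(x, d, M, mu=1.0, delta=1e-6):
    """NLMS adaptive filter (Algorithm 1).

    x: input signal, d: desired signal, M: filter length,
    mu: step size (bounded between 0 and 2),
    delta: small regularization constant to avoid division by zero.
    Returns the final weight vector and the a priori error signal.
    """
    w = np.zeros(M)
    e = np.zeros(len(x))
    for n in range(M, len(x)):
        xn = x[n - M + 1:n + 1][::-1]       # x(n) = [x(n) ... x(n-M+1)]^T
        e[n] = d[n] - w @ xn                # a priori error e(n)
        w = w + mu * e[n] * xn / (xn @ xn + delta)
    return w, e
```
        </preformat>
        <p>In a noiseless identification scenario, the weights converge to the unknown impulse response and the a priori error decays toward zero.</p>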
      </sec>
      <sec id="sec-3-2">
        <title>3.2. FNLMS Algorithm</title>
        <p>The Fast Recursive Least Squares (FRLS) algorithm is an efficient computational alternative to the Recursive Least Squares (RLS) algorithm. The FRLS algorithm minimizes the sum of squared errors with an exponential forgetting factor λ by calculating the dual Kalman gain k(n) of the RLS algorithm, introducing the forward and backward predictor vectors a(n) and b(n). In the FRLS algorithm, the adaptation vector is defined as follows:</p>
        <p>G(n) = γ(n)k(n)</p>
        <p>where γ(n) is called the likelihood factor. This algorithm provides a high convergence speed compared to the NLMS algorithm. However, its CC remains high in comparison with that of the NLMS.</p>
        <p>A more recent approach, the fast-convergence NLMS (FNLMS) algorithm [7], further reduces
computational complexity by simplifying the adaptation gain of the FRLS algorithm, achieving a computational
cost similar to that of the NLMS algorithm while maintaining a convergence rate close to that of the
FRLS algorithm. In the FNLMS algorithm, the adaptation gain is computed using the following recursive
equation:</p>
        <p>k(n) = [ −ē(n)/(λp(n−1)+c₀) ; k_{M−1}(n−1) ] (7)</p>
        <p>where k_{M−1}(n−1) contains the first M−1 components of the gain vector k(n−1), and c₀ is a small constant included to prevent division by zero. The forward prediction error ē(n) and its variance p(n) are calculated using the following equations:</p>
        <p>ē(n) = x(n) + a(n)x(n−1) (8)</p>
        <p>p(n) = λp(n−1) + ē²(n) (9)</p>
        <p>The predictor a(n) is estimated by using the auto-correlation coefficients r₀(n) and r₁(n):</p>
        <p>a(n) = r₁(n)/r₀(n) (10)</p>
        <p>These auto-correlation coefficients are recursively updated as follows:</p>
        <p>r₀(n) = λr₀(n−1) + x(n)x(n) (11)</p>
        <p>r₁(n) = λr₁(n−1) + x(n)x(n−1) (12)</p>
        <p>where λ is the forgetting factor. The likelihood (conversion) factor γ(n) is updated using the following recursive equation:</p>
        <p>γ(n) = γ(n−1) / (1 + γ(n−1)δ(n)) (13)</p>
        <p>where δ(n) is computed as follows:</p>
        <p>δ(n) = k_M(n)x(n−M) + x(n)ē(n)/(λp(n−1)+c₀) (14)</p>
        <p>The filter update of the FNLMS algorithm is then:</p>
        <p>w(n) = w(n−1) − μ_FNLMS γ(n)k(n)e(n) (15)</p>
        <p>The FNLMS algorithm is summarized in Algorithm 2.</p>
        <sec id="sec-3-2-1">
          <title>Algorithm 2 FNLMS Algorithm</title>
          <p>Initialization: w(0) = 0, γ(0) = 1, r₁(0) = 0, r₀(0) = 1, p(0) = 1</p>
          <p>r₀(n) = λr₀(n−1) + x(n)x(n)</p>
          <p>r₁(n) = λr₁(n−1) + x(n)x(n−1)</p>
          <p>a(n) = r₁(n)/(r₀(n)+c₀)</p>
          <p>ē(n) = x(n) + a(n)x(n−1)</p>
          <p>p(n) = λp(n−1) + ē²(n)</p>
          <p>k(n) = [ −ē(n)/(λp(n−1)+c₀) ; k_{M−1}(n−1) ]</p>
          <p>δ(n) = k_M(n)x(n−M) + x(n)ē(n)/(λp(n−1)+c₀)</p>
          <p>γ(n) = γ(n−1) / (1 + γ(n−1)δ(n))</p>
          <p>e(n) = d(n) − wᵀ(n−1)x(n)</p>
          <p>w(n) = w(n−1) − μ_FNLMS γ(n)k(n)e(n)</p>
          <p>where e(n) is the a priori error computed with the prior estimate vector w(n−1).</p>
        </sec>
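        <p>The first-order prediction recursions that drive the FNLMS gain can be sketched as follows (an illustrative sketch only: the function name, state layout, and the parameter names lam and c0 are our assumptions, and only the scalar leading component of the gain is computed):</p>
        <preformat>
```python
import numpy as np

def fnlms_prediction_step(x_n, x_prev, state, lam=0.99, c0=1e-6):
    """One step of the first-order prediction recursions used by FNLMS
    to build its adaptation gain. state holds r0, r1, p from the
    previous step. Returns the updated state, the forward prediction
    error e_bar, and the leading gain component c."""
    r0 = lam * state["r0"] + x_n * x_n      # autocorrelation at lag 0
    r1 = lam * state["r1"] + x_n * x_prev   # autocorrelation at lag 1
    a = r1 / (r0 + c0)                      # first-order predictor a(n)
    e_bar = x_n + a * x_prev                # forward prediction error
    c = -e_bar / (lam * state["p"] + c0)    # leading entry of the gain
    p = lam * state["p"] + e_bar * e_bar    # prediction-error power
    return {"r0": r0, "r1": r1, "p": p}, e_bar, c
```
        </preformat>
        <p>Because only scalar recursions are involved, the per-sample cost of the gain computation stays constant, which is what keeps the FNLMS close to the NLMS in complexity.</p>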
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Set-Membership FNLMS (SM-FNLMS) algorithm</title>
        <p>The main strategy of the SM-FNLMS algorithm is to conduct a verification step that checks whether the prior estimate vector w(n−1) falls outside the constraint set Ψ. This set includes all vectors w for which the corresponding output error e(n) at time n remains within a specified upper bound, denoted by ε:</p>
        <p>Ψ = { w ∈ R^M : |d(n) − wᵀx(n)| &lt; ε } (16)</p>
        <p>A recursive algorithm with a priori error e(n) testing can be used to converge the filter w(n) to the set of filter solutions defined by equation (16). The recursive updating equation is given below:</p>
        <p>w(n) = w(n−1) − μ(n)γ(n)k(n)e(n) (17)</p>
        <p>μ(n) = 1 − ε/|e(n)| if |e(n)| &gt; ε, and μ(n) = 0 otherwise (18)</p>
        <p>Clearly, when |e(n)| ≤ ε, the step-size value will be μ(n) = 0, and consequently w(n) = w(n−1), resulting in no update of the filter. This provides a benefit in terms of the computational complexity of the overall update time. Additionally, the SM-FNLMS algorithm employs a variable step size, which can achieve good convergence with an optimal steady-state MSE. The SM-FNLMS algorithm is summarized in Algorithm 3.</p>
        <sec id="sec-3-3-1">
          <title>Algorithm 3 SM-FNLMS Algorithm</title>
          <p>Initialization: w(0) = 0, γ(0) = 1, r₁(0) = 0, r₀(0) = 1, p(0) = 1</p>
          <p>r₀(n) = λr₀(n−1) + x(n)x(n)</p>
          <p>r₁(n) = λr₁(n−1) + x(n)x(n−1)</p>
          <p>a(n) = r₁(n)/(r₀(n)+c₀)</p>
          <p>ē(n) = x(n) + a(n)x(n−1)</p>
          <p>p(n) = λp(n−1) + ē²(n)</p>
          <p>k(n) = [ −ē(n)/(λp(n−1)+c₀) ; k_{M−1}(n−1) ]</p>
          <p>δ(n) = k_M(n)x(n−M) + x(n)ē(n)/(λp(n−1)+c₀)</p>
          <p>γ(n) = γ(n−1) / (1 + γ(n−1)δ(n))</p>
          <p>e(n) = d(n) − wᵀ(n−1)x(n)</p>
          <p>if |e(n)| &gt; ε then μ(n) = 1 − ε/|e(n)| and w(n) = w(n−1) − μ(n)γ(n)k(n)e(n); else w(n) = w(n−1); end if</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Proposed Algorithm</title>
      <p>Consider the critically decimated subband adaptive filtering structure (D = N) given in Figure 2 [8]. Before filtering, the signals x(n) and d(n) are analyzed using the analysis filters H_j to generate the subband signals x_j(n) and d_j(n). The desired signals d_j(n) and the output signals of the filter y_j(n) are then decimated by a factor D, to provide the low-time-rate signals d_j(k) and y_j(k), producing the low-time-rate subband errors e_j(k).</p>
      <p>Note that the subband and decimated signals are referred to by the indices j and k, respectively. In our algorithm, in each subband, we calculate the prediction parameters using the corresponding input subband signal x_j(k), following the same strategy as the fullband FNLMS.</p>
      <p>The proposed algorithm is based on subband set-membership, where the adaptive filter w(k+1) belongs to the set of filters Ψ:</p>
      <p>w(k+1) ∈ Ψ (19)</p>
      <p>The proposed algorithm uses subband adaptive updating, so we define the set of filters Ψ as the intersection of the sets of subband filters Ψ_j, as follows:</p>
      <p>Ψ = Ψ₁ ∩ Ψ₂ ∩ · · · ∩ Ψ_N (20)</p>
      <p>Each subband filter set Ψ_j is defined in the low sampling time k as follows:</p>
      <p>Ψ_j = { w ∈ R^M : |d_j(k) − wᵀx_j(k)| &lt; ε_j } (21)</p>
      <p>As with the fullband SM-FNLMS algorithm, we can converge the adaptive filter inside the set Ψ. However, here, N conditions are imposed on the subband errors. The recursive updating equation is given by:</p>
      <p>w(k) = w(k−1) − Σ_{j=1}^{N} μ_j(k)γ_j(k)k_j(k)e_j(k) (22)</p>
      <p>μ_j(k) = 1 − ε_j/|e_j(k)| if |e_j(k)| &gt; ε_j, and μ_j(k) = 0 otherwise (23)</p>
      <sec id="sec-4-1">
        <title>Algorithm 4 Proposed SB-SM-FNLMS Algorithm</title>
        <p>For each subband j = 1, . . . , N:</p>
        <p>r_{j,0}(k) = λ r_{j,0}(k−1) + x_j(k)x_j(k)</p>
        <p>r_{j,1}(k) = λ r_{j,1}(k−1) + x_j(k)x_j(k−1)</p>
        <p>a_j(k) = r_{j,1}(k)/(r_{j,0}(k)+c₀)</p>
        <p>ē_j(k) = x_j(k) + a_j(k)x_j(k−1)</p>
        <p>p_j(k) = λ p_j(k−1) + ē_j²(k)</p>
        <p>k_j(k) = [ −ē_j(k)/(λ p_j(k−1)+c₀) ; k_{j,M−1}(k−1) ]</p>
        <p>δ_j(k) = k_{j,M}(k)x_j(k−M) + x_j(k)ē_j(k)/(λ p_j(k−1)+c₀)</p>
        <p>γ_j(k) = γ_j(k−1) / (1 + γ_j(k−1)δ_j(k))</p>
        <p>Filtering: e_j(k) = d_j(k) − wᵀ(k−1)x_j(k)</p>
        <p>if |e_j(k)| &gt; ε_j then μ_j(k) = 1 − ε_j/|e_j(k)| and w(k) = w(k−1) − μ_j(k)γ_j(k)k_j(k)e_j(k); else w(k) = w(k−1); end if</p>
      </sec>
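      <p>The subband analysis, critical decimation, and per-band set-membership test can be sketched as follows. The two-tap Haar pair below is purely a stand-in for the paper's 32-tap analysis filters, and all names are ours, chosen for illustration:</p>
      <preformat>
```python
import numpy as np

# Haar analysis pair, used here only as a simple stand-in for the
# paper's 32-tap analysis filters (an assumption for illustration).
H0 = np.array([1.0, 1.0]) / np.sqrt(2.0)   # low-pass band
H1 = np.array([1.0, -1.0]) / np.sqrt(2.0)  # high-pass band

def analyze_and_decimate(s, D=2):
    """Split s into N=2 subbands and decimate by D (critical: D=N)."""
    low = np.convolve(s, H0)[::D]
    high = np.convolve(s, H1)[::D]
    return low, high

def subband_sm_errors(d, y, bound):
    """Decimated subband errors e_j(k) = d_j(k) - y_j(k), plus a mask
    of the low-rate instants where the error exceeds the bound and an
    update would therefore be triggered."""
    errors, update_mask = [], []
    for dj, yj in zip(analyze_and_decimate(d), analyze_and_decimate(y)):
        e = dj - yj
        errors.append(e)
        update_mask.append(np.abs(e) > bound)
    return errors, update_mask
```
        </preformat>
      <p>Because the errors live at the decimated rate, the set-membership test is evaluated N times less often per band, which is the source of the overall complexity saving.</p>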
    </sec>
    <sec id="sec-5">
      <title>5. Simulation</title>
      <p>In this section, we evaluate the performance of the proposed SB-SM-FNLMS algorithm in terms of convergence rate (MSE learning curve) and computational complexity (CC). The input signal is a colored signal generated by an autoregressive model:</p>
      <p>x(n) = −0.8650x(n−1) − 0.8066x(n−2) − 0.7703x(n−3) + v(n) (24)</p>
      <p>Here, v(n) represents Gaussian white noise whose variance σ_v² is adjusted to ensure σ_x² = 1. The desired signal is derived by filtering the input signal with an impulse response of 512 samples, as illustrated in Figure 3. To simulate various perturbations, white noise with a variance of σ_b² = 0.01 is added to the desired signal. The parameters of each algorithm are adjusted for optimal convergence rates, with step sizes set to 1. For the set-membership algorithms (i.e., SM-NLMS, SM-FNLMS, and the proposed SB-SM-FNLMS), the error bound is defined as ε = √(5σ_b²) = 0.223. This simulation aims to evaluate the behavior of the proposed algorithms in addressing the impulse response identification problem. The learning curve simulation involves calculating the segmental mean square error (SegMSE), as described by the relation below:</p>
      <p>SegMSE(m) = 10 log₁₀( (1/B) Σ_{i=1}^{B} e²(mB+i) ) (25)</p>
      <p>Where B is the block length, and in this simulation, B is set to 400 samples. To generate the subband input signals x_j(k) and subband desired signals d_j(k), we use analysis and synthesis FIR filters with 32 taps. The frequency responses of these filters for the two subband decompositions (N=2 and N=4) are shown in Figure 4.</p>
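      <p>The SegMSE learning curve can be computed from the error signal as follows (the definition assumed here is the usual 10·log₁₀ of the block-averaged squared error; the function name is ours):</p>
      <preformat>
```python
import numpy as np

def segmental_mse_db(e, B=400):
    """Segmental MSE learning curve: split the error signal into
    blocks of B samples and report each block's mean squared error
    in dB."""
    n_blocks = len(e) // B
    blocks = np.asarray(e[:n_blocks * B]).reshape(n_blocks, B)
    return 10.0 * np.log10(np.mean(blocks**2, axis=1))
```
        </preformat>
      <p>Block averaging smooths the sample-to-sample fluctuations of e²(n), which makes the steady-state levels of the compared algorithms easier to read off the curve.</p>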
      <sec id="sec-5-1">
        <title>5.1. Learning curve</title>
        <p>In this experiment, we evaluate the performance of the proposed SB-SM-FNLMS algorithm in comparison with the NLMS, SM-NLMS, FNLMS, and SM-FNLMS algorithms using the SegMSE criterion. We consider 2, 3, and 4 subband decompositions with critical decimation for our proposed algorithm. The obtained results are shown in Figure 5. Based on this figure, we observe a higher convergence rate with the proposed algorithm compared to all the other algorithms, especially for higher numbers of subband decompositions (N), which is due to the decorrelation introduced by the subband decomposition. Additionally, a lower steady-state MSE is achieved by the proposed algorithm, owing to the minimization of the subband power errors, which leads to a lower fullband power error.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Computational complexity</title>
        <p>Table 1 presents the computational complexity (CC) in terms of the number of multiplications and divisions required for one iteration of the five algorithms (i.e., NLMS, SM-NLMS, FNLMS, SM-FNLMS, and the proposed SB-SM-FNLMS). With critical decimation (D = N), the proposed algorithm requires 2M + 12N + 2 multiplications and 1 + 4N divisions per iteration, and for M ≫ N, the CC of the proposed algorithm is close to that of the SM-FNLMS algorithm. However, since the adaptive filter is updated at the low time rate k, the overall time CC of the proposed algorithm can be significantly lower than that of the SM-FNLMS. Figure 6 presents the ON/OFF updating of the filter at each time instant for our simulation. We set N = 4, and the updating filter based on the proposed algorithm operates on four subbands at a low time rate.</p>
        <p>As shown in Figure 6, the update frequency obtained by the proposed algorithm is less dense than that of the other algorithms. The total number of updates obtained for the learning curve in Figure 5 is provided in Table 2. Based on this table, we observe that the number of updates is approximately one-fifth of that of the SM-FNLMS and SM-NLMS algorithms. This result demonstrates the superior performance of the proposed algorithm in terms of computational complexity (CC).</p>
        <table-wrap id="tab1">
          <label>Table 1</label>
          <caption>
            <p>Computational complexity (number of multiplications per iteration).</p>
          </caption>
          <table>
            <thead>
              <tr><th>Algorithm</th><th>Multiplications</th></tr>
            </thead>
            <tbody>
              <tr><td>NLMS</td><td>3M + 1</td></tr>
              <tr><td>SM-NLMS</td><td>3M + 1</td></tr>
              <tr><td>FNLMS</td><td>2M + 14</td></tr>
              <tr><td>SM-FNLMS</td><td>2M + 14</td></tr>
              <tr><td>Proposed SB-SM-FNLMS</td><td>(2+M)N/D + MN/D + 12N</td></tr>
            </tbody>
          </table>
        </table-wrap>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>We have introduced in this work a new subband set-membership-based Fast Normalized Least Mean Square (SB-SM-FNLMS) algorithm. By incorporating subband filtering, the proposed algorithm improves both the convergence rate and the computational complexity (CC). Simulation results demonstrate its superior performance compared to the existing NLMS, SM-NLMS, FNLMS, and SM-FNLMS algorithms in terms of convergence speed, final mean square error (MSE), and computational complexity (CC), making it well-suited for practical adaptive filtering applications, including acoustic echo cancellation, adaptive noise reduction, etc.</p>
      <p>During the preparation of this work, the author used ChatGPT and Grammarly for grammar and spelling checking, paraphrasing, and rewording. After using these tools, the author reviewed and edited the content as needed and takes full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Douglas</surname>
          </string-name>
          ,
          <article-title>Adaptive filters employing partial updates</article-title>
          ,
          <source>IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing</source>
          <volume>44</volume>
          (
          <year>1997</year>
          )
          <fpage>209</fpage>
          -
          <lpage>216</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Werner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. L.</given-names>
            <surname>De Campos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. S.</given-names>
            <surname>Diniz</surname>
          </string-name>
          ,
          <article-title>Partial-update NLMS algorithms with data-selective updating</article-title>
          ,
          <source>IEEE Transactions on Signal Processing</source>
          <volume>52</volume>
          (
          <year>2004</year>
          )
          <fpage>938</fpage>
          -
          <lpage>949</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Gollamudi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nagaraj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kapoor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-F.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <article-title>Set-membership filtering and a set-membership normalized LMS algorithm with an adaptive step size</article-title>
          ,
          <source>IEEE Signal Processing Letters</source>
          <volume>5</volume>
          (
          <year>1998</year>
          )
          <fpage>111</fpage>
          -
          <lpage>114</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>I.</given-names>
            <surname>Hassani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Arezki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Benallal</surname>
          </string-name>
          ,
          <article-title>Set-membership Fast-NLMS algorithm for acoustic echo cancellation</article-title>
          , in:
          <source>2018 International Conference on Signal, Image, Vision and their Applications (SIVA)</source>
          , IEEE,
          <year>2018</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M. S. E.</given-names>
            <surname>Abadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Moradiani</surname>
          </string-name>
          ,
          <article-title>A unified approach to tracking performance analysis of the selective partial update adaptive filter algorithms in nonstationary environment</article-title>
          ,
          <source>Digital Signal Processing</source>
          <volume>23</volume>
          (
          <year>2013</year>
          )
          <fpage>817</fpage>
          -
          <lpage>830</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Chefi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Djendi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Guessoum</surname>
          </string-name>
          ,
          <article-title>New efficient two channel forward set-membership partial-update NLMS algorithms for blind speech enhancement and acoustic noise reduction</article-title>
          ,
          <source>Applied Acoustics</source>
          <volume>141</volume>
          (
          <year>2018</year>
          )
          <fpage>322</fpage>
          -
          <lpage>332</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Benallal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Arezki</surname>
          </string-name>
          ,
          <article-title>A fast convergence normalized least-mean-square type algorithm for adaptive filtering</article-title>
          ,
          <source>International Journal of Adaptive Control and Signal Processing</source>
          <volume>28</volume>
          (
          <year>2014</year>
          )
          <fpage>1073</fpage>
          -
          <lpage>1080</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>K.-A.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.-S.</given-names>
            <surname>Gan</surname>
          </string-name>
          ,
          <article-title>Improving convergence of the NLMS algorithm using constrained subband updates</article-title>
          ,
          <source>IEEE Signal Processing Letters</source>
          <volume>11</volume>
          (
          <year>2004</year>
          )
          <fpage>736</fpage>
          -
          <lpage>739</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>