<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">A New Subband Set-Membership Fast NLMS (SB-SM-FNLMS) Adaptive Algorithm</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Mohamed</forename><surname>Zerouali</surname></persName>
							<email>zerouali.med@yahoo.com</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Blida</orgName>
								<address>
									<settlement>Blida</settlement>
									<country key="DZ">Algeria</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Mohamed</forename><surname>Djendi</surname></persName>
							<email>m_djendi@yahoo.fr</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Blida</orgName>
								<address>
									<settlement>Blida</settlement>
									<country key="DZ">Algeria</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<orgName type="department">International Conference on Informatics and Applied Mathematics (IAM&apos;24)</orgName>
								<address>
									<addrLine>December 4-5, 2024</addrLine>
									<settlement>Guelma</settlement>
									<country key="DZ">Algeria</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">A New Subband Set-Membership Fast NLMS (SB-SM-FNLMS) Adaptive Algorithm</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">AA61F3481B21C2CF63ADBBF55FE36447</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:11+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>NLMS</term>
					<term>FNLMS</term>
					<term>SM</term>
					<term>SB</term>
					<term>SM-FNLMS</term>
					<term>SB-SM-FNLMS</term>
					<term>SegMSE</term>
					<term>CC</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This study introduces a novel Subband Set-Membership Fast Normalized Least Mean Square (SB-SM-FNLMS) adaptive filtering algorithm. By integrating the subband adaptive filtering approach into the Set-Membership Fast Normalized Least Mean Square (SM-FNLMS) algorithm, the convergence rate, final mean square error (MSE), and computational complexity (CC) are all improved. A performance comparison, based on the learning curve (MSE plot), between the proposed SB-SM-FNLMS algorithm and the existing Normalized Least Mean Square (NLMS), Set-Membership Normalized Least Mean Square (SM-NLMS), Fast Normalized Least Mean Square (FNLMS), and Set-Membership Fast Normalized Least Mean Square (SM-FNLMS) algorithms demonstrates the superior performance of the proposed algorithm.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>In modern communication systems, such as hands-free telephony and audio teleconferencing, adaptive filtering plays an important role, particularly in applications like acoustic echo cancellation (AEC) and noise reduction (NR). Adaptive filtering adjusts filter coefficients in real time, making it highly effective in non-stationary environments. Several reduced-complexity adaptive algorithms have been proposed in the literature, including partial update techniques <ref type="bibr" target="#b0">[1]</ref>, where only a subset of filter taps is updated at each iteration <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. Set-membership algorithms have also been introduced as an alternative, selecting specific time-update instants to reduce the overall computational complexity (CC) <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4]</ref>. A compromise between partial updating and set-membership NLMS algorithms has been proposed to further reduce the CC <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b4">5,</ref><ref type="bibr" target="#b5">6]</ref>. Recently, a Set-Membership Fast NLMS (SM-FNLMS) algorithm was developed <ref type="bibr" target="#b3">[4]</ref>. The FNLMS algorithm <ref type="bibr" target="#b6">[7]</ref> exploits the decorrelation properties of the input signal to improve convergence speed by estimating the first forward predictor coefficient. When combined with the set-membership approach, this improves both the computational complexity and the convergence rate.</p><p>In this work, we develop a subband (SB) version of the SM-FNLMS algorithm, in which a set-membership adaptive filtering technique is applied in each subband. This approach offers two key advantages: subband filtering enhances the convergence rate, while incorporating set membership in each subband reduces the update frequency, which lowers the computational complexity compared with the original SM-FNLMS algorithm <ref type="bibr" target="#b3">[4]</ref>. The performance of the proposed algorithm is evaluated in terms of the mean square error (MSE) and the overall computational cost over the simulation time. The structure of this paper is as follows: Section 2 discusses the adaptive filtering problem. Section 3 introduces the NLMS, FNLMS <ref type="bibr" target="#b6">[7]</ref>, and SM-FNLMS <ref type="bibr" target="#b3">[4]</ref> algorithms. In Section 4, we present the derivation of the proposed subband SM-FNLMS (SB-SM-FNLMS) algorithm. Simulation results, in terms of MSE and computational complexity, are provided in Section 5. Finally, Section 6 concludes the paper.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Adaptive Filtering</head><p>The principle of adaptive filtering is illustrated in Figure <ref type="figure" target="#fig_0">1</ref>. It involves processing an input signal x(n) to generate, at each time instant, an output signal y(n) such that the difference between the desired response d(n) and the estimated response y(n) is minimized. This minimization is achieved by updating the coefficients (weights) of the adaptive filter w at time n, using the latest data set, which includes the desired signal d(n), the input signal x(n), and the a priori filtering error defined as follows:</p><formula xml:id="formula_0">𝑒(𝑛) = 𝑑(𝑛) − w 𝑇 (𝑛 − 1)x(𝑛)<label>(1)</label></formula><p>Here, the input signal vector x(n) contains the last M samples of the input signal at instant n, and the filter vector w(n) contains the M adjusted coefficients at time instant n. These two vectors are defined as follows:</p><formula xml:id="formula_1">x(𝑛) = [︀ 𝑥(𝑛) 𝑥(𝑛 − 1) . . . 𝑥(𝑛 − 𝑀 + 1) ]︀ 𝑇<label>(2)</label></formula><formula xml:id="formula_2">w(𝑛) = [︀ 𝑤 0 (𝑛) 𝑤 1 (𝑛) . . . 𝑤 𝑀 −1 (𝑛) ]︀ 𝑇<label>(3)</label></formula><p>In most common cases, the desired signal d(n) is correlated with the input signal x(n), as it is obtained by a linear transformation of the input signal (e.g., in acoustic echo cancellation, adaptive system identification, and adaptive noise cancellation). The adaptive algorithm iteratively minimizes the mean square error E[𝑒²(𝑛)] at each time step, using the previous filter estimate and the new correction term G(n)e(n):</p><formula xml:id="formula_3">w(𝑛 + 1) = w(𝑛) + G(𝑛)𝑒(𝑛)<label>(4)</label></formula><p>where G(n) represents the adaptation gain. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Adaptive filtering algorithms</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">NLMS Algorithm</head><p>The adaptation gain G(n) can be computed in different ways by various algorithms. In the normalized least mean squares (NLMS) algorithm, it is obtained by minimizing the mean squared error (MSE) and is defined as:</p><formula xml:id="formula_4">G(𝑛) = 𝜇 NLMS x(𝑛)/x 𝑇 (𝑛)x(𝑛)<label>(5)</label></formula><p>Here, 𝜇 NLMS serves as the step-size parameter, controlling the convergence behavior of the NLMS algorithm, and is bounded by 0 &lt; 𝜇 NLMS &lt; 2. As a result, the adaptive filter's weights are updated through the recursive equation:</p><formula xml:id="formula_5">w(𝑛 + 1) = w(𝑛) + 𝜇 NLMS x(𝑛)𝑒(𝑛)/x 𝑇 (𝑛)x(𝑛)<label>(6)</label></formula><p>The NLMS algorithm has a computational complexity of 3M+1 multiplications and 1 division per iteration, making it feasible for real-time applications with limited computational resources. Algorithm 1 provides a summary of the NLMS algorithm.</p><formula xml:id="formula_6">Algorithm 1 NLMS Algorithm 𝑒(𝑛) = 𝑑(𝑛) − w 𝑇 (𝑛 − 1)x(𝑛) w(𝑛) = w(𝑛 − 1) + 𝜇 NLMS x(𝑛)𝑒(𝑛)/(x 𝑇 (𝑛)x(𝑛) + 𝛿 NLMS )</formula><p>where 𝛿 NLMS is a small regularization constant to avoid division by zero.</p></div>
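The regularized NLMS recursion of Algorithm 1 can be sketched in a few lines of numpy. This is an illustrative toy (not the authors' code): it identifies a hypothetical unknown FIR system `w_true` from noiseless input/desired pairs, with M, the seed, and the signal lengths chosen arbitrarily for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8                                  # filter length (illustrative)
w_true = rng.standard_normal(M)        # hypothetical unknown system
w = np.zeros(M)                        # adaptive filter w(n)
x_buf = np.zeros(M)                    # x(n) = [x(n) ... x(n-M+1)]^T, Eq. (2)
mu, delta = 1.0, 1e-8                  # step size and regularization

for n in range(2000):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.standard_normal()   # new input sample x(n)
    d = w_true @ x_buf                 # noiseless desired signal d(n)
    e = d - w @ x_buf                  # a priori error, Eq. (1)
    # regularized NLMS update, Algorithm 1
    w = w + mu * x_buf * e / (x_buf @ x_buf + delta)
```

With white input and no observation noise, the weights converge to `w_true`, matching the identification setup used later in the simulations.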
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">FNLMS Algorithm</head><p>The Fast Recursive Least Squares (FRLS) algorithm is a computationally efficient alternative to the Recursive Least Squares (RLS) algorithm. The FRLS algorithm minimizes the sum of squared errors with an exponential forgetting factor 𝜆 by computing the dual Kalman gain k(n) of the RLS algorithm, introducing the forward and backward predictor vectors a(n) and b(n). In the FRLS algorithm, the adaptation gain is defined as follows:</p><formula xml:id="formula_7">𝐺(𝑛) = 𝛾(𝑛)𝑘(𝑛)<label>(7)</label></formula><p>where 𝛾(𝑛) is called the likelihood factor. This algorithm provides a high convergence rate compared with the NLMS algorithm. However, its CC remains high in comparison with that of the NLMS.</p><p>A more recent approach, the fast-convergence NLMS (FNLMS) algorithm <ref type="bibr" target="#b6">[7]</ref>, further reduces the computational complexity by simplifying the adaptation gain of the FRLS algorithm, achieving a computational cost similar to that of the NLMS algorithm while maintaining a convergence rate close to that of the FRLS algorithm. In the FNLMS algorithm, the adaptation gain is computed using the following recursive equation:</p><formula xml:id="formula_8">[︂ 𝑘(𝑛) 𝑘(𝑛) ]︂ = [︃ −𝑒 ¯(𝑛)/(𝜆𝑝(𝑛 − 1) + 𝑐 0 ) 𝑘(𝑛 − 1) ]︃<label>(8)</label></formula><p>Here, 𝑐 0 is a small constant included to prevent division by zero. The forward prediction error 𝑒 ¯(𝑛) and its variance p(n) are calculated using the following equations:</p><formula xml:id="formula_9">𝑒 ¯(𝑛) = 𝑥(𝑛) + 𝑎(𝑛)𝑥(𝑛 − 1) (9) 𝑝(𝑛) = 𝜆𝑝(𝑛 − 1) + 𝑒 ¯2(𝑛)<label>(10)</label></formula><p>The predictor 𝑎(𝑛) is estimated using the autocorrelation coefficients 𝑟 0 (𝑛) and 𝑟 1 (𝑛):</p><formula xml:id="formula_10">𝑎(𝑛) = 𝑟 1 (𝑛)/𝑟 0 (𝑛)<label>(11)</label></formula><p>These autocorrelation coefficients are recursively updated as follows:</p><formula xml:id="formula_11">𝑟 0 (𝑛) = 𝜆 𝑎 𝑟 0 (𝑛 − 1) + 𝑥(𝑛)𝑥(𝑛) (12) 𝑟 1 (𝑛) = 𝜆 𝑎 𝑟 1 (𝑛 − 1) + 𝑥(𝑛)𝑥(𝑛 − 1)<label>(13)</label></formula><p>where 𝜆 𝑎 is the forgetting factor. The conversion factor 𝛾(𝑛) is updated using the following recursive equation:</p><formula xml:id="formula_12">𝛾(𝑛) = 𝛾(𝑛 − 1)/(1 + 𝛾(𝑛 − 1) + 𝛽(𝑛))<label>(14)</label></formula><p>where 𝛽(𝑛) is computed as follows:</p><formula xml:id="formula_13">𝛽(𝑛) = 𝑘(𝑛)𝑥(𝑛 − 𝑀 ) + 𝑥(𝑛)𝑒 ¯(𝑛)/(1 + 𝜆𝑝(𝑛 − 1) + 𝑐 0 )<label>(15)</label></formula><p>The FNLMS algorithm is summarized in Algorithm 2:</p><formula xml:id="formula_14">Algorithm 2 FNLMS algorithm 𝑘(0) = 0, 𝛾(0) = 1, 𝑟 1 (0) = 0, 𝑟 0 (0) = 1, 𝑝(0) = 1 0.9 &lt; 𝜆 &lt; 1, 0.9 &lt; 𝜆 𝑎 &lt; 1, 𝑐 𝑎 = 𝑐 0 (small constant) 𝑟 0 (𝑛) = 𝜆 𝑎 𝑟 0 (𝑛 − 1) + 𝑥(𝑛)𝑥(𝑛) 𝑟 1 (𝑛) = 𝜆 𝑎 𝑟 1 (𝑛 − 1) + 𝑥(𝑛)𝑥(𝑛 − 1) 𝑎(𝑛) = 𝑟 1 (𝑛)/(𝑟 0 (𝑛) + 𝑐 𝑎 ) 𝑒 ¯(𝑛) = 𝑥(𝑛) + 𝑎(𝑛)𝑥(𝑛 − 1) 𝑝(𝑛) = 𝜆𝑝(𝑛 − 1) + 𝑒 ¯2(𝑛) [︂ 𝑘(𝑛) 𝑘(𝑛) ]︂ = [︃ −𝑒 ¯(𝑛)/(𝜆𝑝(𝑛 − 1) + 𝑐 0 ) 𝑘(𝑛 − 1) ]︃ 𝛽(𝑛) = 𝑘(𝑛)𝑥(𝑛 − 𝑀 ) + 𝑥(𝑛)𝑒 ¯(𝑛)/(1 + 𝜆𝑝(𝑛 − 1) + 𝑐 0 ) 𝛾(𝑛) = 𝛾(𝑛 − 1)/(1 + 𝛾(𝑛 − 1) + 𝛽(𝑛))</formula><p>Filtering: 𝑒(𝑛) = 𝑑(𝑛) − w 𝑇 (𝑛 − 1)𝑥(𝑛) w(𝑛) = w(𝑛 − 1) − 𝜇 FNLMS 𝛾(𝑛)𝑘(𝑛)𝑒(𝑛)</p></div>
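The scalar prediction recursions of Eqs. (9)-(13) can be sketched as follows. This is our own illustrative transcription of the paper's equations (function name, defaults, and the regularization constant `c_a` are ours), covering only the predictor part of FNLMS, not the full gain/filter update.

```python
import numpy as np

def fnlms_predictor(x, lam_a=0.94, lam=0.94, c_a=0.01):
    """First-order forward predictor estimation, per Eqs. (9)-(13)."""
    r0, r1, p = 1.0, 0.0, 1.0          # initializations as in Algorithm 2
    a_hist, e_bar_hist = [], []
    x_prev = 0.0
    for xn in x:
        r0 = lam_a * r0 + xn * xn      # Eq. (12): autocorrelation at lag 0
        r1 = lam_a * r1 + xn * x_prev  # Eq. (13): autocorrelation at lag 1
        a = r1 / (r0 + c_a)            # Eq. (11), regularized as in Algorithm 2
        e_bar = xn + a * x_prev        # Eq. (9): forward prediction error
        p = lam * p + e_bar ** 2       # Eq. (10): prediction-error variance
        a_hist.append(a)
        e_bar_hist.append(e_bar)
        x_prev = xn
    return np.array(a_hist), np.array(e_bar_hist), p
```

For an uncorrelated (white) input, the estimated predictor coefficient stays near zero and the prediction error essentially equals the input; for a correlated input, a(n) tracks the lag-one correlation, which is what drives the faster convergence of FNLMS.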
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Set-Membership FNLMS (SM-FNLMS) algorithm</head><p>The main strategy of the SM-FNLMS algorithm is to perform a verification step that checks whether the prior estimate vector w(𝑛 − 1) falls outside the constraint set Ψ. This set contains all vectors w for which the corresponding output error 𝑒(𝑛) at time 𝑛 remains within a specified upper bound, denoted by 𝜁:</p><formula xml:id="formula_15">Ψ = {︀ w ∈ R 𝑀 : |𝑑(𝑛) − w 𝑇 x(𝑛)| &lt; 𝜁 }︀<label>(16)</label></formula><p>A recursive algorithm that tests the a priori error 𝑒(𝑛) can be used to drive the filter w(𝑛) into the set of filter solutions defined by equation (<ref type="formula" target="#formula_15">16</ref>). The recursive updating equation is given below:</p><formula xml:id="formula_16">w(𝑛) = w(𝑛 − 1) − 𝜇(𝑛)𝛾(𝑛)𝑘(𝑛)𝑒(𝑛)<label>(17)</label></formula><p>where:</p><formula xml:id="formula_17">𝜇(𝑛) = {︃ 1 − 𝜁/|𝑒(𝑛)| if |𝑒(𝑛)| &gt; 𝜁 0 otherwise<label>(18)</label></formula><p>Clearly, when |𝑒(𝑛)| ≤ 𝜁, the step-size value is 𝜇(𝑛) = 0 and consequently w(𝑛) = w(𝑛 − 1), so no filter update is performed. This reduces the overall computational cost, since many update steps are skipped. Additionally, the SM-FNLMS algorithm employs a variable step size, which achieves fast convergence with a low steady-state MSE. The SM-FNLMS algorithm is summarized in Algorithm 3.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Algorithm 3 SM-FNLMS Algorithm</head><p>𝑘(0) = 0, 𝛾(0) = 1, 𝑟 1 (0) = 0, 𝑟 0 (0) = 1, 𝑝(0) = 1 0.9 &lt; 𝜆 &lt; 1, 0.9 &lt; 𝜆 𝑎 &lt; 1, 𝑐 𝑎 = 𝑐 0 (small constant)</p><formula xml:id="formula_18">𝑟 0 (𝑛) = 𝜆 𝑎 𝑟 0 (𝑛 − 1) + 𝑥(𝑛)𝑥(𝑛) 𝑟 1 (𝑛) = 𝜆 𝑎 𝑟 1 (𝑛 − 1) + 𝑥(𝑛)𝑥(𝑛 − 1) 𝑎(𝑛) = 𝑟 1 (𝑛)/(𝑟 0 (𝑛) + 𝑐 𝑎 ) 𝑒 ¯(𝑛) = 𝑥(𝑛) + 𝑎(𝑛)𝑥(𝑛 − 1) 𝑝(𝑛) = 𝜆𝑝(𝑛 − 1) + 𝑒 ¯2(𝑛) [︂ 𝑘(𝑛) 𝑘(𝑛) ]︂ = [︃ −𝑒 ¯(𝑛)/(𝜆𝑝(𝑛 − 1) + 𝑐 0 ) 𝑘(𝑛 − 1) ]︃ 𝛽(𝑛) = 𝑘(𝑛)𝑥(𝑛 − 𝑀 ) + 𝑥(𝑛)𝑒 ¯(𝑛)/(1 + 𝜆𝑝(𝑛 − 1) + 𝑐 0 ) 𝛾(𝑛) = 𝛾(𝑛 − 1)/(1 + 𝛾(𝑛 − 1) + 𝛽(𝑛)) Filtering: 𝑒(𝑛) = 𝑑(𝑛) − w 𝑇 (𝑛 − 1)𝑥(𝑛) if |𝑒(𝑛)| &gt; 𝜁 then 𝜇(𝑛) = 1 − 𝜁/|𝑒(𝑛)| w(𝑛) = w(𝑛 − 1) − 𝜇(𝑛)𝛾(𝑛)𝑘(𝑛)𝑒(𝑛) else w(𝑛) = w(𝑛 − 1) end if</formula></div>
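The set-membership step-size rule of Eq. (18) is a one-line gate; the sketch below is our own hypothetical helper (name and default bound are ours, the default matching the ζ = 0.223 used later in the simulations).

```python
def sm_step_size(e, zeta=0.223):
    """Variable step size of Eq. (18): update only if |e(n)| > zeta."""
    if abs(e) > zeta:
        return 1.0 - zeta / abs(e)  # shrinks toward 0 as |e| approaches zeta
    return 0.0                      # filter already inside the constraint set
```

Near steady state most errors satisfy |e(n)| ≤ ζ, so μ(n) = 0 and w(n) = w(n−1): the bulk of the update cost is simply skipped, which is the source of the complexity saving claimed for the set-membership family.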
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Proposed Algorithm</head><p>Consider the critically sampled subband adaptive filtering structure (𝐷 = 𝑁 ) shown in Figure <ref type="figure" target="#fig_1">2</ref> <ref type="bibr" target="#b7">[8]</ref>. Before filtering, the input signals 𝑥(𝑛) and 𝑑(𝑛) are analyzed by the analysis filters 𝐹 𝑗 to generate the subband signals 𝑥 𝑗 (𝑛) and 𝑑 𝑗 (𝑛). The desired signals 𝑑 𝑗 (𝑛) and the filter output signals 𝑦 𝑗 (𝑛) are then decimated by a factor 𝐷 to provide the low-rate signals 𝑑 𝑗 (𝑘) and 𝑦 𝑗 (𝑘), producing the low-rate subband errors 𝑒 𝑗 (𝑘).</p><p>Note that the subband and decimated signals are referred to by the indices 𝑗 and 𝐷. In our algorithm, the prediction parameters in each subband are calculated from the corresponding subband input signal 𝑥 𝑗 (𝑛), following the same strategy as the fullband FNLMS.</p><p>The proposed algorithm is based on subband set membership, where the adaptive filter 𝑤(𝑘 + 1) belongs to the set of filters Ψ:</p><formula xml:id="formula_19">𝑤(𝑘 + 1) ∈ Ψ (19)</formula><p>The proposed algorithm uses subband adaptive updating, so we define the set of filters Ψ as the intersection of the subband filter sets 𝜓 𝑗 , as follows:</p><formula xml:id="formula_20">Ψ = 𝜓 1 ∩ 𝜓 2 ∩ • • • ∩ 𝜓 𝑁<label>(20)</label></formula><p>Each subband filter set 𝜓 𝑗 is defined at the low sampling rate 𝑘 as follows:</p><formula xml:id="formula_22">𝜓 𝑗 = {𝑤 ∈ R 𝑀 : |𝑑 𝐷,𝑗 (𝑘) − 𝑥 𝑇 𝑗 (𝑘)𝑤| &lt; 𝜁} (21)</formula><p>As with the fullband SM-FNLMS algorithm, we can make the adaptive filter converge into the set Ψ. However, here, 𝑁 conditions are imposed on the subband errors. The recursive updating equation is given by:</p><formula xml:id="formula_23">𝑤(𝑘) = 𝑤(𝑘 − 1) − 𝑁 ∑︁ 𝑗=1 𝜇 𝑗 (𝑘)𝛾 𝑗 (𝑘)𝑘 𝑗 (𝑘)𝑒 𝑗 (𝑘)<label>(22)</label></formula><p>where:</p><formula xml:id="formula_24">𝜇 𝑗 (𝑘) = {︃ 1 − 𝜁/|𝑒 𝑗 (𝑘)|, if |𝑒 𝑗 (𝑘)| &gt; 𝜁 0, otherwise<label>(23)</label></formula><p>The proposed SB-SM-FNLMS algorithm is summarized below:</p><formula xml:id="formula_25">𝑘 𝑗 (0) = 0, 𝛾 𝑗 (0) = 1, 𝑟 𝑗,1 (0) = 0, 𝑟 𝑗,0 (0) = 1, 𝑝 𝑗 (0) = 1 0.9 &lt; 𝜆 &lt; 1, 0.9 &lt; 𝜆 𝑎 &lt; 1, 𝑐 𝑎 = 𝑐 0 (small constant) 𝑟 𝑗,0 (𝑛) = 𝜆 𝑎 𝑟 𝑗,0 (𝑛 − 1) + 𝑥 𝑗 (𝑛)𝑥 𝑗 (𝑛) 𝑟 𝑗,1 (𝑛) = 𝜆 𝑎 𝑟 𝑗,1 (𝑛 − 1) + 𝑥 𝑗 (𝑛)𝑥 𝑗 (𝑛 − 1) 𝑎 𝑗 (𝑛) = 𝑟 𝑗,1 (𝑛)/(𝑟 𝑗,0 (𝑛) + 𝑐 𝑎 ) 𝑒 ¯𝑗(𝑛) = 𝑥 𝑗 (𝑛) + 𝑎 𝑗 (𝑛)𝑥 𝑗 (𝑛 − 1) 𝑝 𝑗 (𝑛) = 𝜆𝑝 𝑗 (𝑛 − 1) + 𝑒 ¯2 𝑗 (𝑛) [︂ 𝑘 𝑗 (𝑛) 𝑘 𝑗 (𝑛) ]︂ = [︃ −𝑒 ¯𝑗 (𝑛)/(𝜆𝑝 𝑗 (𝑛 − 1) + 𝑐 0 ) 𝑘 𝑗 (𝑛 − 1) ]︃ 𝛽 𝑗 (𝑛) = 𝑘 𝑗 (𝑛)𝑥 𝑗 (𝑛 − 𝑀 ) + 𝑥 𝑗 (𝑛)𝑒 ¯𝑗 (𝑛)/(𝜆𝑝 𝑗 (𝑛 − 1) + 𝑐 0 ) 𝛾 𝑗 (𝑛) = 𝛾 𝑗 (𝑛 − 1)/(1 + 𝛾 𝑗 (𝑛 − 1) + 𝛽 𝑗 (𝑛)) Filtering: 𝑒 𝑗 (𝑘) = 𝑑 𝑗 (𝑘) − 𝑤(𝑘 − 1) 𝑇 𝑥 𝑗 (𝑘) if |𝑒 𝑗 (𝑘)| &gt; 𝜁 then 𝜇 𝑗 (𝑘) = 1 − 𝜁/|𝑒 𝑗 (𝑘)| 𝑤 𝑗 (𝑘) = 𝑤 𝑗 (𝑘 − 1) − 𝜇 𝑗 (𝑘)𝛾 𝑗 (𝑘)𝑘 𝑗 (𝑘)𝑒 𝑗 (𝑘) else 𝑤 𝑗 (𝑘) = 𝑤 𝑗 (𝑘 − 1) end if</formula></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Simulation</head><p>In this section, we evaluate the performance of the proposed SB-SM-FNLMS algorithm in terms of convergence rate (MSE learning curve) and computational complexity (CC). The input signal is a colored signal generated by an autoregressive model:</p><formula xml:id="formula_26">𝑥(𝑛) = −0.8650 𝑥(𝑛 − 1) − 0.8066 𝑥(𝑛 − 2) − 0.7703 𝑥(𝑛 − 3) + 𝑣(𝑛)<label>(24)</label></formula><p>Here, v(n) represents Gaussian white noise with variance 𝜎 2 𝑣 , adjusted to ensure 𝜎 2 𝑥 = 1. The desired signal is obtained by filtering the input signal with an impulse response of 512 samples, as illustrated in Figure <ref type="figure" target="#fig_2">3</ref>. To simulate various perturbations, white noise with variance 𝜎 2 𝜂 = 0.01 is added to the desired signal. The parameters of each algorithm are adjusted for optimal convergence, with all step sizes set to 1. For the set-membership algorithms (i.e., SM-NLMS, SM-FNLMS, and the proposed SB-SM-FNLMS), the error bound is set to 𝜁 = √︁ 5𝜎 2 𝜂 = 0.223. This simulation evaluates the behavior of the proposed algorithm on the impulse response identification problem. The learning curve is obtained by calculating the segmental mean square error (SegMSE), as described by the relation below:</p><formula xml:id="formula_27">SegMSE dB = 𝐵−1 ∑︁ 𝑛=0 10 log 10 [︀ 𝑒 2 (𝑛) ]︀<label>(25)</label></formula><p>where B is the block length; in this simulation, B is set to 400 samples. To generate the subband input signals 𝑥 𝑗 (𝑘) and the subband desired signals 𝑑 𝑗 (𝑘), we use analysis and synthesis FIR filters with 32 taps. The frequency responses of these filters for two subband decompositions (N = 2 and N = 4) are shown in Figure <ref type="figure" target="#fig_3">4</ref>.</p></div>
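The simulation setup can be sketched as follows: the AR(3) input of Eq. (24), rescaled to unit power, and a segmental MSE in dB over non-overlapping blocks of B = 400 samples. Averaging e²(n) inside each block before taking the logarithm is our reading of Eq. (25); the seed and signal length are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 8000
v = rng.standard_normal(L)          # Gaussian white driving noise v(n)
x = np.zeros(L)
for n in range(3, L):               # AR(3) model of Eq. (24)
    x[n] = -0.8650 * x[n-1] - 0.8066 * x[n-2] - 0.7703 * x[n-3] + v[n]
x /= x.std()                        # enforce sigma_x^2 = 1 by rescaling

def seg_mse_db(e, B=400):
    """Segmental MSE in dB over non-overlapping blocks of B samples."""
    blocks = e[: len(e) // B * B].reshape(-1, B)
    return 10.0 * np.log10(np.mean(blocks ** 2, axis=1))
```

Feeding the error sequence of each algorithm through `seg_mse_db` yields one dB value per block, i.e., the learning curves plotted in Figure 5.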
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">Learning curve</head><p>In this experiment, we evaluate the performance of the proposed SB-SM-FNLMS algorithm in comparison with the NLMS, SM-NLMS, FNLMS, and SM-FNLMS algorithms using the SegMSE criterion. We consider 2, 3, and 4 subband decompositions with critical decimation for our proposed algorithm. The obtained results are shown in Figure <ref type="figure" target="#fig_4">5</ref>. Based on this figure, we observe a higher convergence rate with the proposed algorithm compared to all other algorithms, especially for higher numbers of subband decompositions (N), which is due to the decorrelation introduced by subband decomposition. Additionally, a lower steady-state MSE is achieved by the proposed algorithm, which is due to the minimization of subband power errors, leading to a lower fullband power error. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.">Computational complexity</head><p>Table 1 presents the computational complexity (CC) in terms of the number of multiplications and divisions required for one iteration of the five algorithms (i.e., NLMS, SM-NLMS, FNLMS, SM-FNLMS, and the proposed SB-SM-FNLMS). With critical decimation (D=N), the proposed algorithm requires 2M+12N+2 multiplications and 1+4N divisions per iteration, and for 𝑀 ≫ 𝑁 , its CC is close to that of the SM-FNLMS algorithm. However, since the adaptive filter is updated at the low time rate k, the overall computational cost of the proposed algorithm over the simulation can be significantly lower than that of the SM-FNLMS. Figure <ref type="figure" target="#fig_5">6</ref> presents the ON/OFF filter-update decisions at each time instant in our simulation. We set N=4, and the filter update of the proposed algorithm operates on four subbands at a low time rate.</p><p>As shown in Figure <ref type="figure" target="#fig_5">6</ref>, the updates performed by the proposed algorithm are less frequent than those of the other algorithms. The total number of updates obtained for the learning curve in Figure <ref type="figure" target="#fig_4">5</ref> is provided in Table <ref type="table" target="#tab_0">2</ref>. Based on this table, we observe that the number of updates is approximately one-fifth that of the SM-FNLMS and SM-NLMS algorithms. This result demonstrates the superior performance of the proposed algorithm in terms of computational complexity (CC).</p></div>
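The per-iteration multiplication counts above can be compared numerically. The closed forms below are transcribed from Table 1 (with D = N substituted for the proposed scheme, as stated in the text); the helper itself is ours and only evaluates them.

```python
def mult_counts(M, N):
    """Multiplications per iteration, transcribed from Table 1 (D = N)."""
    return {
        "NLMS": 3 * M + 1,
        "SM-NLMS": 3 * M + 1,
        "FNLMS": 2 * M + 14,
        "SM-FNLMS": 2 * M + 14,
        "SB-SM-FNLMS": 2 * M + 12 * N + 2,  # (2+M)N/D + MN/D + 12N with D = N
    }
```

For the simulated case M = 512 and N = 4, the proposed scheme needs 1074 multiplications per iteration versus 1537 for NLMS and 1038 for SM-FNLMS: slightly more than SM-FNLMS per iteration, but it updates roughly five times less often, which dominates the overall cost.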
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion</head><p>We have introduced in this work a new subband set-membership-based Fast Normalized Least Mean Square (SB-SM-FNLMS) algorithm. By incorporating subband filtering, the proposed algorithm improves both convergence rate and computational complexity (CC). Simulation results demonstrate its superior performances compared to the existing NLMS, SM-NLMS, FNLMS, and SM-FNLMS algorithms in term of convergence speed rate, final mean square error (MSE) and computational complexity (CC) , making it well-suited for practical adaptive filtering applications, including acoustic echo cancellation, adaptive noise reduction, etc.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Adaptive filtering principle</figDesc><graphic coords="2,161.01,427.74,270.78,152.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Subband adaptive filtering</figDesc><graphic coords="6,138.45,142.80,315.90,144.59" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Impulse Response</figDesc><graphic coords="7,138.45,309.35,315.89,176.13" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Frequency responses of the analysis and synthesis FIR filters (32 taps)</figDesc><graphic coords="7,138.45,536.85,315.91,125.08" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: The learning curve is defined with 𝜆 𝑎 = 𝜆 = 0.94, 𝑐 0 = 𝑐 𝑎 = 0.01, 𝜎 2 𝜂 = 0.01, and 𝜁 = √︁ 5𝜎 2 𝜂 = 0.223. The step size of all algorithms is fixed to 1.</figDesc><graphic coords="8,138.45,158.84,315.91,176.43" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Obtained frequency updating for each set-membership algorithm</figDesc><graphic coords="9,138.45,184.24,315.90,372.17" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Computational complexity per iteration</figDesc><table><row><cell>Algorithm</cell><cell>Multiplications</cell><cell>Divisions</cell></row><row><cell>NLMS</cell><cell>3M + 1</cell><cell>1</cell></row><row><cell>SM-NLMS</cell><cell>3M + 1</cell><cell>2</cell></row><row><cell>FNLMS</cell><cell>2M + 14</cell><cell>4</cell></row><row><cell>SM-FNLMS</cell><cell>2M + 14</cell><cell>5</cell></row><row><cell>Proposed SB-SM-FNLMS</cell><cell>(2+M)N/D + MN/D + 12N</cell><cell>N/D + 4N</cell></row></table></figure>
		</body>
		<back>
			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Declaration on Generative AI</head><p>During the preparation of this work, the author used ChatGPT and Grammarly to check grammar and spelling and to paraphrase and reword. After using these tools, the author reviewed and edited the content as needed and takes full responsibility for the publication's content.</p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Adaptive filters employing partial updates</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">C</forename><surname>Douglas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing</title>
		<imprint>
			<biblScope unit="volume">44</biblScope>
			<biblScope unit="page" from="209" to="216" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Partial-update nlms algorithms with data-selective updating</title>
		<author>
			<persName><forename type="first">S</forename><surname>Werner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">L</forename><surname>De Campos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">S</forename><surname>Diniz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Signal Processing</title>
		<imprint>
			<biblScope unit="volume">52</biblScope>
			<biblScope unit="page" from="938" to="949" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Set-membership filtering and a set-membership normalized lms algorithm with an adaptive step size</title>
		<author>
			<persName><forename type="first">S</forename><surname>Gollamudi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Nagaraj</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kapoor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y.-F</forename><surname>Huang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Signal Processing Letters</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="111" to="114" />
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Set-membership fast-nlms algorithm for acoustic echo cancellation</title>
		<author>
			<persName><forename type="first">I</forename><surname>Hassani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Arezki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Benallal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2018 International Conference on Signal, Image, Vision and their Applications (SIVA)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1" to="5" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A unified approach to tracking performance analysis of the selective partial update adaptive filter algorithms in nonstationary environment</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S E</forename><surname>Abadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Moradiani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Digital Signal Processing</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="817" to="830" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">New efficient two channel forward set-membership partialupdate nlms algorithms for blind speech enhancement and acoustic noise reduction</title>
		<author>
			<persName><forename type="first">A</forename><surname>Cheffi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Djendi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Guessoum</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Applied Acoustics</title>
		<imprint>
			<biblScope unit="volume">141</biblScope>
			<biblScope unit="page" from="322" to="332" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">A fast convergence normalized least-mean-square type algorithm for adaptive filtering</title>
		<author>
			<persName><forename type="first">A</forename><surname>Benallal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Arezki</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Adaptive Control and Signal Processing</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="page" from="1073" to="1080" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Improving convergence of the nlms algorithm using constrained subband updates</title>
		<author>
			<persName><forename type="first">K.-A</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W.-S</forename><surname>Gan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE signal processing letters</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="736" to="739" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
