[Filter] Normalized LMS adaptive filter

1. NLMS adaptive filter

1. Introduction

In adaptive filtering algorithms, steady-state error and convergence speed are two of the most important performance indicators. For a traditional fixed-step-size adaptive filter, the two are in direct conflict: a smaller step size yields a small steady-state error, but convergence is relatively slow; on the other hand, if the step size is increased, the filter converges faster, but at the expense of a larger steady-state error.
For this reason, a variable-step-size adaptive filtering algorithm was proposed: the normalized least mean square (Normalized Least Mean Square, NLMS) adaptive filtering algorithm. In the initial stage of filtering, this algorithm uses a larger step size, so convergence is faster; as the algorithm converges, the step size shrinks, ensuring higher convergence accuracy.

A question may arise here: the NLMS algorithm merely adds normalization to the LMS algorithm, so why is it a variable-step-size LMS algorithm? This question is answered at the end of the derivation, so please read on patiently.

2. Principle derivation

In the weight-vector update formula of the LMS filter, $X(n)$ is a noisy signal. When $X(n)$ is large, the LMS algorithm suffers from gradient noise amplification. The normalized LMS algorithm solves this problem.

Let $W(n)$ and $W(n+1)$ denote the tap-weight vectors at iterations $n$ and $n+1$, respectively. The design criterion of the normalized LMS filter can be expressed as the following constrained optimization problem:

Given the input vector $X(n)$ and the desired output $d(n)$, determine the updated weight vector $W(n+1)$ such that the Euclidean norm of the increment $\delta W(n+1) = W(n+1) - W(n)$ is minimized, that is,

$$\min ||\delta W(n+1)||$$

subject to the constraint $W(n+1)^H X(n) = d(n)$.
Using the Lagrange-multiplier method, convert the constrained optimization problem into an unconstrained one by constructing the real-valued quadratic cost function

$$J(n) = ||\delta W(n+1)||^2 + \mathrm{Re}\!\left[\lambda^* \left(d(n) - W(n+1)^H X(n)\right)\right]$$

where $\lambda$ is a complex Lagrange multiplier, $*$ denotes complex conjugation, and $\mathrm{Re}[\cdot]$ takes the real part.
Expanding the first term:

$$J(n) = [W(n+1)-W(n)]^H [W(n+1)-W(n)] + \mathrm{Re}\!\left[\lambda^* \left(d(n) - W(n+1)^H X(n)\right)\right]$$

To find the $W(n+1)$ that minimizes the cost function $J(n)$, differentiate:

$$\frac{\partial J(n)}{\partial W(n+1)} = 2[W(n+1)-W(n)] - \lambda X(n)$$

Setting $\frac{\partial J(n)}{\partial W(n+1)} = 0$ gives $W(n+1) = W(n) + \frac{1}{2}\lambda X(n)$. Substituting this into the constraint yields:

$$d(n) = \left[W(n)+\tfrac{1}{2}\lambda X(n)\right]^H X(n) = W^H(n)X(n) + \tfrac{1}{2}\lambda X^H(n)X(n) = W^H(n)X(n) + \tfrac{1}{2}\lambda ||X(n)||^2$$

Since $e(n) = d(n) - W^H(n)X(n)$, substituting into the above formula gives:

$$\lambda = \frac{2e(n)}{||X(n)||^2}$$
Combining the above formulas:

$$\delta W(n+1) = W(n+1) - W(n) = \tfrac{1}{2}\lambda X(n) = \frac{X(n)}{||X(n)||^2}\, e(n)$$

To control the incremental change of the tap-weight vector from one iteration to the next without changing the direction of the vector, introduce a positive real scaling factor $\mu$:

$$\mu\,\delta W(n+1) = \mu\,[W(n+1)-W(n)] = \tfrac{1}{2}\mu\lambda X(n) = \frac{\mu X(n)}{||X(n)||^2}\, e(n)$$

This finally gives the optimal solution of the cost function $J(n)$:

$$W(n+1) = W(n) + \frac{\mu}{||X(n)||^2}\, X(n)\, e(n)$$

The above formula is the weight-vector update formula of the normalized LMS algorithm: the product vector $X(n)\,e(n)$ is normalized by the squared Euclidean norm of the tap-input vector $X(n)$.
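The derivation can be sanity-checked numerically: with $\mu = 1$ and no regularization, a single update step should force the new weights to satisfy the constraint $W(n+1)^H X(n) = d(n)$ exactly. A minimal NumPy sketch (real-valued signals; all values here are arbitrary illustrations):

```python
import numpy as np

# Check: with mu = 1 and no regularization, one NLMS step makes the
# updated weights satisfy the constraint W(n+1)^H X(n) = d(n) exactly.
rng = np.random.default_rng(0)
M = 8                         # number of taps (arbitrary for this check)
X = rng.standard_normal(M)    # tap-input vector X(n)
W = rng.standard_normal(M)    # current weight vector W(n)
d = 0.7                       # desired output d(n)

e = d - W @ X                 # a priori error e(n)
W_next = W + X * e / (X @ X)  # update with mu = 1, alpha = 0

print(abs(W_next @ X - d))    # ~0 (floating-point precision)
```

This confirms that the unscaled NLMS step drives the a posteriori error to zero, which is exactly the constrained-minimization criterion above.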
To guarantee that the denominator of $\frac{\mu}{||X(n)||^2}$ is never zero, a constant $\alpha$ $(0 < \alpha \le 1)$ is added to $||X(n)||^2$, giving the final update:

$$W(n+1) = W(n) + \frac{\mu}{\alpha + ||X(n)||^2}\, X(n)\, e(n)$$
Now the question from the introduction can be answered, i.e., why NLMS is a variable-step-size LMS algorithm:

Regard $\frac{\mu}{\alpha + ||X(n)||^2}$ as a whole and define the effective step size

$$\mu' = \frac{\mu}{\alpha + ||X(n)||^2}$$

During the iterative update of the weight vector, $X(n)$ keeps changing, so the step size $\mu'$ is updated as well. In other words, while the NLMS filter is working, its step size changes with the input signal $X(n)$.
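The dependence of $\mu'$ on the input power can be seen in a few lines. A small sketch (the helper `effective_step` and the sample values are illustrative, not from the original text):

```python
import numpy as np

mu, alpha = 0.5, 0.01   # illustrative parameter values

def effective_step(x, mu=mu, alpha=alpha):
    """Effective NLMS step size mu' = mu / (alpha + ||x||^2)."""
    return mu / (alpha + np.dot(x, x))

# A high-power input shrinks the effective step; a weak input enlarges it.
strong = np.full(8, 2.0)       # ||x||^2 = 32
weak   = np.full(8, 0.1)       # ||x||^2 = 0.08
print(effective_step(strong))  # 0.5 / 32.01
print(effective_step(weak))    # 0.5 / 0.09
```

This is the variable-step-size behavior: large inputs automatically get a small step (avoiding gradient noise amplification), and weak inputs get a large one.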

The calculation steps of the NLMS adaptive filter are summarized as follows:

$$y(n) = W^T(n)\, X(n) \tag{1}$$

$$e(n) = d(n) - y(n) \tag{2}$$

$$W(n+1) = W(n) + \frac{\mu}{\alpha + ||X(n)||^2}\, X(n)\, e(n) \tag{3}$$
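The three steps above translate directly into code. A compact NumPy sketch (the function name `nlms_filter` and its default parameters are my own choices, not from the original MATLAB code):

```python
import numpy as np

def nlms_filter(xn, dn, M=20, mu=0.5, alpha=0.01):
    """NLMS adaptive filter following steps (1)-(3)."""
    n = len(xn)
    W = np.zeros(M)
    yn = np.zeros(n)
    en = np.zeros(n)
    for i in range(M - 1, n):
        X = xn[i - M + 1:i + 1][::-1]              # tap-input vector, newest sample first
        yn[i] = W @ X                              # (1) filter output y(n)
        en[i] = dn[i] - yn[i]                      # (2) error signal e(n)
        W = W + mu / (alpha + X @ X) * X * en[i]   # (3) normalized weight update
    return yn, en, W

# usage: recover a 10 Hz sine from noise at roughly 15 dB SNR
fs = 1000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 10 * t + np.pi / 3)
rng = np.random.default_rng(1)
noisy = clean + 0.126 * rng.standard_normal(fs)
yn, en, W = nlms_filter(noisy, clean)
print(np.mean(en[-200:] ** 2))   # steady-state MSE, typically well below the input noise power
```

Unlike the MATLAB version later in the article, this sketch keeps only the current weight vector rather than the full weight history, which is all that steps (1)-(3) require.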

2. MATLAB simulation experiment

1. Filter effect

When the filter order is $L = 20$ and the step factor is $\mu = 0.05$, there is no obvious difference between the LMS filter and the NLMS filter on an input signal with a signal-to-noise ratio of 15 dB.
If the step size is increased beyond 0.06, the LMS filter converges significantly faster, but its steady-state error rises sharply and its performance deteriorates, while the NLMS filter still maintains good filtering performance.
Figure 1 Comparison of filtering effects of two filters
Figure 2 Comparison of filter output errors

clc;
clear;
close all;

%% Generate the test signal
fs = 1000;                  % sampling frequency
t = (0:1/fs:1-1/fs);        % time vector
f = 10;                     % signal frequency
x = sin(2*pi*f*t+pi/3);     % clean (original) signal
y = awgn(x,15,'measured');  % signal after adding white Gaussian noise (SNR = 15 dB)

%% NLMS adaptive filter
L = 20;       % filter order
Mu = 0.005;   % step size, in the range 0 to 1
xn = y;       % input signal
dn = x;       % desired signal
Alpha = 0.2;

[yn1, W1, en1] = LMS(xn,dn,L,Mu);
[yn2, W2, en2] = NLMS(xn, dn, L, Mu, Alpha);

%% Plot the results
figure;
subplot(3,1,1);plot(t,xn);xlabel('Time/s');ylabel('Amplitude');title('Filter input signal');
subplot(3,1,2);plot(t,yn1);xlabel('Time/s');ylabel('Amplitude');title('LMS filter output');
subplot(3,1,3);plot(t,yn2);xlabel('Time/s');ylabel('Amplitude');title('NLMS filter output');
figure;
subplot(2,1,1);plot(en1);title('Convergence of the LMS error signal');
subplot(2,1,2);plot(en2);title('Convergence of the NLMS error signal');
function [yn, W, en] = NLMS(xn, dn, M, mu, alpha)

% input:
%       xn: input signal, 1 x n row vector
%       dn: desired signal
%       M:  filter order
%       mu: convergence factor (step size)
%       alpha: regularization constant
% output:
%       yn: output signal
%       W:  weight coefficients
%       en: error signal

    [m,n] = size(xn);
    if m>1 && n>1
        error('Invalid input! xn must be a one-row (or one-column) sequence');
    end
    if m>1          % if the input signal is a column vector, transpose it
        xn = xn';
        n = m;      % the signal length is now the original row count
    end

    itr = n;        % number of iterations equals the length of the input signal
    en = zeros(1,itr);
    W  = zeros(M,itr);    % weight matrix: each column is one iteration

    % compute the optimal weight coefficients
    for i = M:itr                   % i-th iteration
        x = xn(i:-1:i-M+1);         % the M tap inputs of the filter
        y = x*W(:,i-1);             % filter output
        en(i) = dn(i) - y;          % error of the i-th iteration
        W(:,i) = W(:,i-1) + (mu * x' * en(i)) / (alpha + norm(x)^2); % NLMS weight update
        % norm(x)^2 is equivalent to x*x'
    end

    % compute the output sequence
    yn = inf * ones(size(xn));      % Inf initial values keep the first M samples off the plots
    for k = M:n
        x = xn(k:-1:k-M+1);
        yn(k) = x*W(:,end);         % final output using the converged weights
    end
end

2. Effect of different parameters on filter performance

(1) Step size $\mu$

When the step size of the NLMS filter is 0.01, 0.1, 1, and 1.5, the convergence of the error signal is shown in the figure below. As the figure shows, the step size $\mu$ affects both convergence speed and filtering accuracy: as $\mu$ increases, convergence becomes faster, but after convergence the fluctuation around the optimal solution grows, and the steady-state accuracy drops accordingly. Decreasing $\mu$ preserves filtering accuracy, but greatly increases convergence time, which is unsuitable for systems with strict real-time requirements. A compromise between convergence speed and filtering accuracy must therefore be made according to the specific situation; suitable parameters also depend on the characteristics of the signal being processed, and several trials are usually needed to find the optimal value.

Figure: Error convergence for different step sizes
(2) Filter order $L$

With the NLMS step size fixed at $\mu = 0.1$, the error-signal convergence for filter orders $L$ of 10, 20, and 50 is shown in the figure below. As the figure shows, increasing the filter order significantly improves the filtering accuracy, but the amount of missing data at the start of the filter output also increases.

Figure: Comparison of output waveforms for different filter orders
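The order effect can be sketched the same way (again an illustrative NumPy helper, not the original experiment): higher order lowers the steady-state error, while the number of startup samples with no valid output grows with $L$.

```python
import numpy as np

def nlms_errors(xn, dn, M, mu=0.1, alpha=0.2):
    """NLMS error signal for a given filter order M."""
    W = np.zeros(M)
    en = np.zeros(len(xn))
    for i in range(M - 1, len(xn)):
        X = xn[i - M + 1:i + 1][::-1]
        en[i] = dn[i] - W @ X
        W = W + mu / (alpha + X @ X) * X * en[i]
    return en

fs = 1000
t = np.arange(3 * fs) / fs
clean = np.sin(2 * np.pi * 10 * t + np.pi / 3)
rng = np.random.default_rng(4)
noisy = clean + 0.126 * rng.standard_normal(len(t))

for M in (10, 20, 50):
    en = nlms_errors(noisy, clean, M)
    # the first M-1 samples have no valid output (the tap line is not yet filled)
    print(f"M={M:<3} steady-state MSE={np.mean(en[-1000:] ** 2):.4g}  startup samples lost={M - 1}")
```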



Origin blog.csdn.net/weixin_45317919/article/details/126294494