Adaptive Filter Principle - Least Mean Square Algorithm (LMS)

In 1959, Widrow and Hoff proposed the least mean square (LMS) algorithm. Based on Wiener filtering theory, LMS estimates the gradient vector from instantaneous values and updates the adaptive filter weights so as to minimize the energy of the error signal.

Consider an adaptive FIR filter of order N with weight vector $\mathbf{w}(n) = [w_0(n), w_1(n), \ldots, w_{N-1}(n)]^T$ and input vector $\mathbf{x}(n) = [x(n), x(n-1), \ldots, x(n-N+1)]^T$. The output of the filter is

$$ y(n) = \mathbf{w}^T(n)\,\mathbf{x}(n) = \sum_{i=0}^{N-1} w_i(n)\, x(n-i) $$

The desired output is d(n), so the error signal can be defined as:

$$ e(n) = d(n) - y(n) = d(n) - \mathbf{w}^T(n)\,\mathbf{x}(n) $$

Our goal is to minimize the error e(n). Using the minimum mean square error (MMSE) criterion, the objective function to minimize is

$$ J(\mathbf{w}) = E\left[e^2(n)\right] = E\left[\left(d(n) - \mathbf{w}^T\mathbf{x}(n)\right)^2\right] $$

Calculating the derivative of the objective function J(w) with respect to w and setting it to zero gives

$$ \nabla_{\mathbf{w}} J = -2\,E[\mathbf{x}(n)\,e(n)] = -2\left(\mathbf{p} - \mathbf{R}\,\mathbf{w}\right) = 0 \quad\Longrightarrow\quad \mathbf{w}_{\mathrm{opt}} = \mathbf{R}^{-1}\mathbf{p} $$

where $\mathbf{R} = E[\mathbf{x}(n)\mathbf{x}^T(n)]$ is the autocorrelation matrix of the input and $\mathbf{p} = E[\mathbf{x}(n)\,d(n)]$ is the cross-correlation vector. This is the Wiener solution, but R and p are rarely known in practice, so LMS replaces the expectation with the instantaneous estimate $\hat{\nabla}_{\mathbf{w}} J = -2\,\mathbf{x}(n)\,e(n)$ and steps against it at every sample. The update formula for the filter coefficients can then be written as:

$$ \mathbf{w}(n+1) = \mathbf{w}(n) + \mu\, e(n)\, \mathbf{x}(n) $$

In the formula above, μ is the step-size factor. The larger μ is, the faster the algorithm converges but the larger the steady-state error; the smaller μ is, the slower the convergence but the smaller the steady-state error. To guarantee stable convergence, μ should lie in the range

$$ 0 < \mu < \frac{2}{\lambda_{\max}} $$

where $\lambda_{\max}$ is the largest eigenvalue of the input autocorrelation matrix $\mathbf{R}$.
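As a minimal sketch of how this bound can be checked in practice, the MATLAB snippet below estimates R from a stretch of the input and derives an upper limit for μ; the signal, the order M, and all variable names are illustrative assumptions, not taken from the original post.

% Estimate a stable LMS step-size range from the input statistics.
M = 40;                        % assumed filter order (taps)
x = randn(10000, 1);           % illustrative input signal

% Stack the input vectors x(n) = [x(n); ...; x(n-M+1)] as columns
N = length(x);
X = zeros(M, N - M + 1);
for n = M:N
    X(:, n - M + 1) = x(n:-1:n - M + 1);
end

R = (X * X') / size(X, 2);     % sample autocorrelation matrix
mu_max = 2 / max(eig(R));      % stability bound: 0 < mu < 2/lambda_max
mu = 0.1 * mu_max;             % conservative choice well inside the bound
fprintf('mu_max = %.4f, chosen mu = %.4f\n', mu_max, mu);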

  • Advantages: the algorithm is simple, easy to implement, and has low computational complexity (lower than RLS); it can also suppress sidelobe effects.
  • Disadvantages:
    • Convergence is slow (slower than RLS). The LMS coefficients are updated point by point (once for each new pair x(n) and d(n)), and the instantaneous gradient estimated at each sample deviates from the true gradient, so each coefficient update does not follow the true gradient direction exactly but carries some deviation.
    • Tracking performance is poor, and as the filter order (number of taps) increases, the stability of the system decreases.
    • LMS requires the input vectors x(n) at different times to be linearly independent (the independence assumption of LMS). If the input signal is correlated, the gradient noise generated in one iteration propagates into the next, so errors propagate repeatedly, convergence slows down, and tracking performance degrades.

In theory, then, the LMS algorithm works best when the input is white noise. To reduce the correlation of the input signal, a class of "decorrelation LMS" algorithms has been developed.
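To make the effect of input correlation concrete, the sketch below (an illustrative experiment, not from the original post; the system h_true, the AR(1) coefficient 0.95, and all parameter values are assumptions) runs the same LMS identification task on a white input and on a correlated input, and reports the final weight misalignment. The correlated case converges noticeably more slowly.

% LMS system identification: white vs. correlated input.
rng(0);
M = 8;  mu = 0.02;  N = 5000;
h_true = randn(M, 1);                     % unknown system to identify

x_white = randn(N, 1);                    % white input
x_corr = filter(1, [1 -0.95], x_white);   % correlated AR(1) input
x_corr = x_corr / std(x_corr);            % match the input power

for sig = {x_white, x_corr}
    x = sig{1};
    d = filter(h_true, 1, x);             % desired signal = system output
    w = zeros(M, 1);
    for n = M:N
        xx = x(n:-1:n-M+1);               % current input vector
        e = d(n) - w' * xx;               % a priori error
        w = w + mu * e * xx;              % LMS update
    end
    fprintf('final misalignment: %.4f\n', norm(w - h_true) / norm(h_true));
end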

In real scenarios, the following two situations are often encountered:

1. Often we cannot obtain the desired (clean) signal in advance: if a clean signal were already available, there would be no need to filter. When only the noisy signal is known, the noisy signal is used as the reference signal, and a delayed copy of it is used as the input signal. The idea is that over the delay interval the signal component is still correlated while the noise component is not, so the noise can be removed by decorrelation; the delay must be chosen appropriately. A typical example is the adaptive line enhancer (ALE); a sketch of this configuration follows this list.

2. When we know both the noisy signal and the noise, we can use the noisy signal as the reference signal and the noise as the input signal, as in the usage sketch after the code below.
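As a minimal sketch of the ALE configuration, assuming the myLMS function defined below; the tone, noise level, delay of 10 samples, and step size are all illustrative assumptions.

% ALE sketch: noisy signal as reference d, delayed copy as input x.
fs = 8000;  t = (0:fs-1)' / fs;
s = sin(2*pi*440*t);                       % illustrative narrowband signal
noisy = s + 0.5 * randn(size(s));          % noisy observation

delay = 10;                                % assumed decorrelation delay
d = noisy;
x = [zeros(delay,1); noisy(1:end-delay)];  % delayed copy as input

[e, y, w] = myLMS(d, x, 0.005, 40);        % y enhances the periodic part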

The MATLAB code is as follows:

function [e, y, w] = myLMS(d, x, mu, M)
% Inputs:
% d  - microphone (desired) signal
% x  - far-end (reference) signal
% mu - step size, e.g. 0.05
% M  - filter order, also called the number of taps
%
% Outputs:
% e - output error, estimate of the near-end speech
% y - filter output, estimate of the far-end echo
% w - filter coefficients over time

    d = d(:);  x = x(:);    % ensure column vectors
    d_length = length(d);
    if (d_length <= M)
        error('error: signal length is shorter than the filter order!');
    end
    if (d_length ~= length(x))
        error('error: input and reference signals differ in length!');
    end

    w1 = zeros(M,1);        % filter weights
    y = zeros(d_length,1);  % far-end echo estimate
    e = zeros(d_length,1);  % near-end speech estimate
    w = zeros(M,d_length);  % weight history, one column per sample

    % Equivalent alternative that slides a buffer one sample at a time
    % (requires xx = zeros(M,1) before the loop):
    % for n = 1:d_length
    %     xx = [xx(2:M); x(n)];       % shift the newest sample into the buffer
    %     y(n) = w1' * xx;            % far-end echo estimate
    %     e(n) = d(n) - y(n);         % near-end speech estimate
    %     w1 = w1 + mu * e(n) * xx;   % LMS weight update
    %     w(:,n) = w1;
    % end
    for n = M:d_length
        xx = x(n:-1:n-M+1);         % most recent M samples, newest first
        y(n) = w1' * xx;            % far-end echo estimate, (M,1)'*(M,1) -> scalar
        e(n) = d(n) - y(n);         % near-end speech estimate
        w1 = w1 + mu * e(n) * xx;   % LMS weight update, (M,1)
        w(:,n) = w1;                % store the weights for this sample
    end
end
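As a usage sketch for the second configuration (known noise used as the input signal); the echo path, signal shapes, and parameter values below are all illustrative assumptions.

% Noise/echo cancellation sketch: d is the noisy microphone signal,
% x is the known noise (far-end) signal.
rng(1);
n_samples = 20000;
x = randn(n_samples, 1);                   % known noise / far-end signal
echo_path = [0.5; 0.3; -0.2; 0.1];         % assumed echo path
noise_at_mic = filter(echo_path, 1, x);    % noise as picked up by the mic
s = 0.5 * sin(2*pi*0.01*(1:n_samples)');   % stand-in for near-end speech
d = s + noise_at_mic;                      % noisy microphone signal

[e, y, w] = myLMS(d, x, 0.01, 40);         % e approximates s, y the echo
fprintf('residual noise power: %.4f\n', mean((e - s).^2));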

Reference link:

https://www.cnblogs.com/LXP-Never/p/11773190.html
