Adaptive Filter Principle - Affine Projection Algorithm (AP)

The affine projection (AP) algorithm is an adaptive filtering algorithm whose convergence speed and computational complexity lie between those of LMS and RLS.

The L\times 1 filter output vector is

y(n)=X^{H}(n)w(n)

where X(n)=[x(n),x(n-1),\dots,x(n-L+1)] is the N\times L input data matrix with x(n)=[x(n),x(n-1),\dots,x(n-N+1)]^{T}, w(n) is the N\times 1 weight vector, and (\cdot)^{H} denotes the conjugate transpose.

Let d(n)=[d(n),d(n-1),\dots,d(n-L+1)]^{T} be the desired (expected) output vector of the filter. The a priori error vector is then

e(n)=d(n)-y(n)=d(n)-X^{H}(n)w(n)

For simplicity, set \mu=1. Writing the updated weights as w(n+1)=w(n)+\triangle w(n) and requiring the a posteriori error d(n)-X^{H}(n)w(n+1) to vanish, substitution of y(n) into e(n) gives

X^{H}(n)\triangle w(n)=e(n)

Thus the filter-coefficient update \triangle w(n) is determined by a system of L linear equations in the N unknown filter coefficients.

Supplementary background on systems of linear equations:

Consider a linear system A_{m\times n}x_{n\times 1}=b_{m\times 1}.

Assuming full (row or column) rank, there are three cases:

(1) If m=n, the system has the unique exact solution x=A^{-1}b.

(2) If m<n (fewer equations than unknowns), the system has infinitely many solutions. To obtain a unique solution, a constraint is added requiring the norm of x to be minimal; the result is the minimum-norm solution x=A^{H}(AA^{H})^{-1}b.

(3) If m>n (more equations than unknowns), the system generally has no exact solution, only approximate ones. The natural choice is the solution that minimizes the sum of squared errors over all equations, i.e. the least-squares solution x=(A^{H}A)^{-1}A^{H}b. (A quick numerical check of the last two formulas follows below.)
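As a sanity check of the two closed-form expressions above, here is a small MATLAB sketch; the matrix sizes and random data are arbitrary illustration values, not from the original post. MATLAB's pinv returns the minimum-norm solution in the underdetermined case, and the backslash operator returns the least-squares solution in the overdetermined case, so both differences should be essentially zero.

% Numerical check of the minimum-norm and least-squares formulas (illustrative only)
rng(0);
A1 = randn(2,5);  b1 = randn(2,1);      % m < n: underdetermined
x_minnorm = A1'/(A1*A1')*b1;            % x = A^H (A A^H)^{-1} b
disp(norm(x_minnorm - pinv(A1)*b1));    % ~0: pinv gives the minimum-norm solution

A2 = randn(5,2);  b2 = randn(5,1);      % m > n: overdetermined
x_ls = (A2'*A2)\(A2'*b2);               % x = (A^H A)^{-1} A^H b
disp(norm(x_ls - A2\b2));               % ~0: backslash gives the least-squares solution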

Using these results, return to the system X^{H}(n)\triangle w(n)=e(n), where A=X^{H}(n) has m=L rows and n=N columns.

Two cases need to be distinguished:

Case 1: 1≤L<N. The system is underdetermined, and its unique minimum-norm solution is

\triangle w(n)=X(n)\left(X^{H}(n)X(n)\right)^{-1}e(n)

The weight update formula therefore becomes (reintroducing the step size \mu):

w(n+1)=w(n)+\mu X(n)\left(X^{H}(n)X(n)\right)^{-1}e(n)

If L=1, X(n) reduces to the single input vector x(n) and the update degenerates into the NLMS algorithm:

w(n+1)=w(n)+\mu\,\frac{x(n)e(n)}{x^{H}(n)x(n)}

In practice, taking L as 2 or 3 is usually sufficient. A single-iteration check of the L=1 case is sketched below.
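To make the degeneration explicit, the following MATLAB sketch performs one AP update with L=1 and one NLMS update on the same data and confirms they coincide; the vector length, step size, and random data are arbitrary assumptions for illustration.

% One-iteration check: AP with L = 1 coincides with NLMS (real-valued data)
rng(1);
N  = 8;  mu = 0.5;
w  = zeros(N,1);
xv = randn(N,1);                  % current input vector x(n)
d  = randn;                       % desired sample d(n)

% AP update with L = 1 (the data matrix X(n) is just x(n))
e_ap = d - xv'*w;
w_ap = w + mu*xv*((xv'*xv)\e_ap);

% NLMS update
e_nlms = d - xv'*w;
w_nlms = w + mu*xv*e_nlms/(xv'*xv);

disp(norm(w_ap - w_nlms));        % ~0: the two updates are identical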

Case 2: L≥N. The system is overdetermined, and its unique least-squares solution is

\triangle w(n)=\left(X(n)X^{H}(n)\right)^{-1}X(n)e(n)

The weight update formula then becomes:

w(n+1)=w(n)+\mu\left(X(n)X^{H}(n)\right)^{-1}X(n)e(n)

 

  • Advantages: convergence speed and computational complexity lie between those of LMS and RLS.
% References:
% https://github.com/rohillarohan/System-Identification-for-Echo-Cancellation
% https://github.com/3arbouch/ActiveNoiseCancelling/blob/c29ccfcd869657a5f58e1ce7755fe08c3a195bd9/ANC/BookExamples/MATLAB/APLMS_AEC_mono.m
function [e,y,w] = myAP(d,x,mu,M,L,psi)
    % Inputs:
    % d   -- microphone (near-end) signal
    % x   -- far-end speech (reference signal)
    % mu  -- step size
    % M   -- filter order, e.g. 40
    % L   -- projection order (number of columns), e.g. 2
    % psi -- regularization constant, e.g. 1e-4
    %
    % Outputs:
    % e -- output (a priori) error signal
    % y -- filter output (estimated echo)
    % w -- filter coefficients at every sample (M x length(d))

    d = d(:);  x = x(:);              % force column vectors
    d_length = length(d);
    if (d_length <= M)
        error('error: signal length is smaller than the filter order!');
    end
    if (d_length ~= length(x))
        error('error: input signal and reference signal have different lengths!');
    end

    XAF = zeros(M,L);                 % input data matrix X(n), one column per delay
    w1  = zeros(M,1);                 % current filter weights (M x 1)
    y   = zeros(d_length,1);          % estimated echo
    e   = zeros(d_length,1);          % a priori error
    w   = zeros(M,d_length);          % weight history

    for m = M+L:d_length                          % loop over samples
        for k = 1:L                               % build the M x L data matrix
            XAF(:,k) = x(m-k+1:-1:m-k-M+2);
        end
        y(m) = XAF(:,1)'*w1;                      % a priori filter output at sample m
        E = d(m:-1:m-L+1) - XAF'*w1;              % a priori error vector (L x 1)
        % affine projection update: w <- w + mu*X*(X'X + psi*I)^(-1)*e
        w1 = w1 + mu*XAF*((XAF'*XAF + psi*eye(L))\E);
        w(:,m) = w1;
        e(m) = E(1);                              % error at the current sample
    end
end
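Below is a minimal usage sketch of myAP on synthetic data; the echo path, noise level, and parameter values are assumptions for illustration, not taken from the referenced implementations.

% Minimal test of myAP on synthetic data: the far-end signal x passes through
% a random decaying FIR "echo path" h to form the microphone signal d.
rng(2);
Ns = 20000;
x  = randn(Ns,1);                          % far-end speech (white-noise stand-in)
h  = randn(40,1).*exp(-(0:39)'/8);         % arbitrary decaying echo path
d  = filter(h,1,x) + 1e-3*randn(Ns,1);     % microphone signal = echo + noise

[e,y,w] = myAP(d,x,0.5,40,2,1e-4);

% the residual error should decay as the filter converges toward h
plot(10*log10(e.^2 + eps)); xlabel('sample'); ylabel('error power (dB)');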

Reference link:

https://www.cnblogs.com/LXP-Never/p/11773190.html


Original post: https://blog.csdn.net/qq_42233059/article/details/131344494