Advanced GNSS Algorithm (2) - Kalman Filter Single Point Positioning Algorithm Code Implementation

State prediction

The biggest difference between Kalman filtering and the previously implemented least squares is that Kalman filtering uses historical state information, i.e. state prediction.

The BU(k) term (control input) in the formula is not used in our positioning model.

The transition matrix that propagates the state from epoch (k-1) to epoch (k) is really just the relationship between position, velocity, and acceleration we learned in junior high school. The only difference is that what we learned then was one-dimensional, whereas here it is expressed in three dimensions in the ECEF frame.
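As a concrete sketch of those kinematic relations (assuming a 9-element state x = (position, velocity, acceleration) and RTKLIB-style column-major storage; `pva_transition` and `NXP` are illustrative names, not taken from the actual code):

```c
#include <string.h>

#define NXP 9  /* assumed state size: pos(3) + vel(3) + acc(3) */

/* build the PVA transition matrix F (NXP x NXP, column-major):
 *   p_k = p_{k-1} + v_{k-1}*tt + 0.5*a_{k-1}*tt^2
 *   v_k = v_{k-1} + a_{k-1}*tt
 *   a_k = a_{k-1}
 */
static void pva_transition(double tt, double *F)
{
    int i;
    memset(F, 0, sizeof(double) * NXP * NXP);
    for (i = 0; i < NXP; i++) F[i + i * NXP] = 1.0;   /* identity part */
    for (i = 0; i < 3; i++) {
        F[i + (i + 3) * NXP] = tt;                    /* dp/dv = tt        */
        F[i + (i + 6) * NXP] = 0.5 * tt * tt;         /* dp/da = 0.5*tt^2  */
        F[(i + 3) + (i + 6) * NXP] = tt;              /* dv/da = tt        */
    }
}
```

Multiplying this F with a state vector reproduces exactly the junior-high kinematics equations, one axis at a time.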

In addition to the predicted state vector, we also need the variance-covariance information of the predicted state. For anyone who has studied the law of error propagation in adjustment theory, the covariance prediction step is easy to understand.

We only need to add a process noise matrix Q to inflate the variance-covariance of the predicted state.

This is also intuitive: every model has errors. To prevent the predicted state from being weighted too heavily, process noise is added so that the prediction has less influence in the subsequent weighted average.
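The covariance prediction described above is the standard propagation P' = F·P·Fᵀ + Q. A minimal dense implementation, as a sketch (`cov_predict` is an illustrative helper for column-major n×n matrices, not a function from the actual code):

```c
/* covariance prediction: Pp = F*P*F^T + Q, all matrices n x n,
 * stored column-major (element (i,j) at index i + j*n) */
static void cov_predict(const double *F, const double *P, const double *Q,
                        int n, double *Pp)
{
    int i, j, k, l;
    for (i = 0; i < n; i++) for (j = 0; j < n; j++) {
        double s = 0.0;
        for (k = 0; k < n; k++) for (l = 0; l < n; l++) {
            s += F[i + k * n] * P[k + l * n] * F[j + l * n];
        }
        Pp[i + j * n] = s + Q[i + j * n];  /* inflate with process noise */
    }
}
```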

The prediction of the state and its variance-covariance is implemented in the following function:

/* temporal update of position/velocity/acceleration */
static void udpos_spp(rtk_t *rtk, double tt)

What we implement is the PVA model, i.e. the position/velocity/acceleration model.

The function input tt is the time interval between two adjacent state transitions, and the process noise calculation depends mainly on tt. This is also intuitive: the longer the extrapolation time, the larger the process noise should be set.
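One simple way to make the process noise grow with tt is to put noise only on the acceleration states and scale it by the interval. This is an illustrative sketch under assumed conventions (`prn_acc` is a hypothetical tuning parameter; RTKLIB-style implementations typically set the noise per-axis in ENU and rotate it into ECEF, which is omitted here):

```c
#include <math.h>

#define SQRV(x) ((x)*(x))

/* illustrative process-noise diagonal for a 9-state PVA model:
 * only the acceleration states receive noise, scaled by the
 * extrapolation interval tt; position/velocity are driven
 * purely through the transition model */
static void process_noise_diag(double tt, double prn_acc, double *q /* len 9 */)
{
    int i;
    for (i = 0; i < 6; i++) q[i] = 0.0;
    for (i = 6; i < 9; i++) q[i] = SQRV(prn_acc) * fabs(tt);
}
```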


Measurement update

① Residual error calculation

The residual calculation of the Kalman filter is similar to that of least squares. In least squares, the correction is computed relative to an approximate position; the same holds for the Kalman filter. Referring to formula (4), the residual is likewise an update relative to the predicted state. The difference is that the Kalman filter state vector additionally contains velocity and acceleration, even though no observations directly update them.

The design matrix and residual vector still reuse the least-squares rescode function, but the receiver position is now taken from the predicted state, and the function is renamed:

/* pseudorange residuals -----------------------------------------------------*/ 
static int rescode_mulfreq_filter(int iter, const obsd_t *obs, int n, const double *rs, const double *dts, const double *vare, const int *svh, const nav_t *nav, const double *x, const prcopt_t *opt, double *v, double *H, double *var, double *azel, int *vsat, double *resp, int *ns)

This function does not handle the velocity and acceleration states, so it is followed by a function that specifically inserts these two state blocks:

static void insert_vel_acc_colums(const int nv, double *H)

Originally we considered sharing one function with the least-squares code; they may be unified later if there is a chance.
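Since velocity and acceleration have no direct observations, their columns in each measurement's gradient are simply zero. A sketch of the column-insertion idea, under assumed layout (gradient per measurement stored contiguously as [pos(3), clk(nclk)] and expanded to [pos(3), vel(3), acc(3), clk(nclk)]; `insert_vel_acc_cols` is an illustrative stand-alone version, not the actual insert_vel_acc_colums):

```c
#include <string.h>

/* expand each measurement's gradient from [pos(3), clk(nclk)] to
 * [pos(3), vel(3)=0, acc(3)=0, clk(nclk)].
 * Hin:  (3+nclk) x nv, column-major (one column per measurement)
 * Hout: (9+nclk) x nv, column-major */
static void insert_vel_acc_cols(const double *Hin, int nclk, int nv,
                                double *Hout)
{
    int i, nin = 3 + nclk, nout = 9 + nclk;
    for (i = 0; i < nv; i++) {
        memcpy(Hout + i * nout, Hin + i * nin, 3 * sizeof(double));
        memset(Hout + i * nout + 3, 0, 6 * sizeof(double));   /* vel/acc: unobserved */
        memcpy(Hout + i * nout + 9, Hin + i * nin + 3, nclk * sizeof(double));
    }
}
```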

② Filtering

The logic of the Kalman filter is exactly the same as in the RTK algorithm, so the filter function is reused directly. filter calls filter_ internally, with one preprocessing step before the call: only states with a non-zero value and a variance greater than 0 participate in the filtering. This is done because the default state vector in RTK includes the ambiguities of all satellites, and ambiguities with no observation update obviously do not need to participate in filtering. So when using this filter, be careful not to initialize a state to 0; alternatively, call filter_ directly.
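The state-selection step can be sketched as follows (an illustrative stand-alone version of what RTKLIB's filter wrapper does before calling filter_; `active_states` is a hypothetical name):

```c
/* collect indices of states that participate in the filtering:
 * value non-zero AND diagonal variance positive; P is n x n,
 * column-major, so the diagonal of state i is P[i + i*n] */
static int active_states(const double *x, const double *P, int n, int *ix)
{
    int i, k = 0;
    for (i = 0; i < n; i++) {
        if (x[i] != 0.0 && P[i + i * n] > 0.0) ix[k++] = i;
    }
    return k;  /* number of active states */
}
```

This also makes the caveat in the text concrete: a state whose value is exactly 0.0 is silently excluded, which is why initial values of 0 are dangerous with this wrapper.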

The filtering results are written back to rtk_t->x_spp and rtk_t->P_spp to be used by the filter in the next epoch, and the filtered solution is also stored into sol_t.


/* kalman filter ---------------------------------------------------------------
* kalman filter state update as follows:
*
*   K=P*H*(H'*P*H+R)^-1, xp=x+K*v, Pp=(I-K*H')*P
*
* args   : double *x        I   states vector (n x 1)
*          double *P        I   covariance matrix of states (n x n)
*          double *H        I   transpose of design matrix (n x m)
*          double *v        I   innovation (measurement - model) (m x 1)
*          double *R        I   covariance matrix of measurement error (m x m)
*          int    n,m       I   number of states and measurements
*          double *xp       O   states vector after update (n x 1)
*          double *Pp       O   covariance matrix of states after update (n x n)
* return : status (0:ok,<0:error)
 * notes  : matrix stored by column-major order (fortran convention)
*          if state x[i]==0.0, not updates state x[i]/P[i+i*n]
*-----------------------------------------------------------------------------*/
static int filter_(const double *x, const double *P, const double *H,
                   const double *v, const double *R, int n, int m,
                   double *xp, double *Pp)
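For intuition, the update equations in the header comment reduce to a few lines in the scalar case (one state, one measurement). This is only a didactic sketch, not a replacement for filter_:

```c
/* scalar Kalman measurement update:
 *   K  = P*H / (H*P*H + R)
 *   xp = x + K*v
 *   Pp = (1 - K*H) * P
 */
static void kf_update_1d(double x, double P, double H, double v, double R,
                         double *xp, double *Pp)
{
    double K = P * H / (H * P * H + R);  /* Kalman gain */
    *xp = x + K * v;                     /* state update by innovation */
    *Pp = (1.0 - K * H) * P;             /* covariance shrinks after update */
}
```

With P = R the gain is 0.5, i.e. prediction and measurement are averaged with equal weight, which is exactly the "weighted average" intuition used earlier to motivate the process noise.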

Result analysis

To prevent too long a sampling interval from giving the predicted state too low a weight (and thus no visible filtering effect), a dataset with a 1 s sampling interval was deliberately chosen. The data was downloaded from the Hong Kong CORS network. ( ftp://ftp.geodetic.gov.hk/rinex3 )

The relevant data configuration and code were updated in the following two commits:

55df872b4de9362e1166bb46ab9a6b2df131e8d0

7569205e5f6df464e10fa12a29f569e672cb7b44

As described at the end of the previous post, the result after adding Kalman filtering is not as smooth as expected. The initial suspicion is that the accuracy of the state prediction is too low.

Considering that this is static base-station data, the velocity is zero, so a constraint forcing the velocity in all three directions to 0 was added directly to the code. The constraint code is as follows and is very simple:

        /* add velocity constraint for static mode: one pseudo-measurement
         * per velocity component, observed value 0 */
        for (j = 3; j < 6; j++)
        {
            for (k = 0; k < NX_F; k++) {
                H[k + nv * NX_F] = 0.0;
            }
            H[j + nv * NX_F] = 1.0;   /* observe velocity state j directly */
            v[nv] = 0.0 - x[j];       /* innovation: 0 - predicted velocity */
            var[nv] = SQR(0.001);     /* tight constraint variance */
            nv++;
        }

The comparison before and after adding the zero-velocity constraint is shown below: the noisy trace is the Kalman filter result without the constraint, and the smooth trace is the result with the constraint added.

With the zero-velocity constraint the result is quite smooth, and the positioning accuracy reaches the decimeter level, which exceeds expectations. This also shows, to some extent, that there is no problem with our Kalman filtering code.

Official account

Sometimes code or resources are posted on my personal WeChat official account. If you have questions, you can leave a message there in the background; replies are faster. Welcome to follow: GNSS and Autonomous Driving.

Summary of links to other related articles

GNSS Algorithm Learning Series Tutorial - Article List_Indus Fighting's Blog-CSDN Blog


Origin blog.csdn.net/dong20081991/article/details/127540363