Motion Distortion Correction of Laser Point Cloud Using IMU

1. Principle

In laser SLAM localization and mapping, when the lidar moves slowly and does not rotate much, the motion distortion of the point cloud can be ignored, and the lidar pose can still be estimated by ICP registration during frame-to-frame matching. However, when the lidar motion is large, the motion distortion of the laser point cloud has to be taken into account, and the short-term pose estimate provided by an IMU or wheel encoder can be used to remove it. Here I use an IMU to remove the motion distortion. Integrating the acceleration output of the IMU diverges quickly, and with a low-grade IMU the acceleration is not usable at all, so only the angular velocity of the IMU is considered. Within the time span of one point-cloud frame, all IMU angular velocities are integrated to obtain the IMU's rotation angle, which is then converted to the lidar frame; spherical linear interpolation of the lidar attitude quaternion is performed for each point, and the point is transformed to the attitude of the final (end-of-scan) state, which removes the motion distortion of the point cloud.

2. Procedure

Equipment used:

  • Lidar: Velodyne VLP-16, 10 Hz, mechanical rotation, 16 lines; vertical field of view $(-15^{\circ}, 15^{\circ})$ with $2^{\circ}$ between adjacent lines; about 1800 laser points per horizontal revolution, i.e. an average horizontal resolution of $0.2^{\circ}$.
  • IMU: the built-in IMU of a RealSense D435i, at 200 Hz.

Specific process:

  1. Angular velocity integration for the IMU: The angular-velocity integration follows the pre-integration procedure of VINS-Mono. Since the lidar frequency is much lower than the IMU frequency, within one point-cloud period (100 ms) the IMU produces about $200 \div 10 = 20$ measurements. According to the formula:
     $\alpha_{t} = \frac{\omega_{t_0} + \omega_{t}}{2}\cdot dt, \quad dt = t - t_0, \quad \alpha = \sum_{i=0}^{n-1} \alpha_i$
     where $\alpha_{t}$ is the angle change at time $t$ relative to the previous time $t_0$, $\omega_{t_0}$ and $\omega_{t}$ are the angular velocities at the corresponding moments, and $n$ is the number of IMU measurements. By continuously integrating and accumulating in this way, the IMU rotation angle over one period is obtained.

     Here the computation is done with quaternions. A quaternion is $q=\begin{bmatrix} \cos(\frac{\theta}{2}) \\ \sin(\frac{\theta}{2})\cdot r \end{bmatrix} = \begin{bmatrix} w \\ x \\ y \\ z \end{bmatrix}$, where $\theta$ is the rotation angle and $r$ is the unit rotation axis. For adjacent IMU measurements $\frac{\theta}{2}$ tends to 0, so the equivalent infinitesimals can be used:
     $\lim_{x\to 0} \frac{\sin(x)}{x} = 1, \quad \lim_{x\to 0} \cos(x) = 1$
     Then:
     $q_t = q_{t_0}\cdot q_{dt}, \quad q_{dt} = \left(1, \frac{\alpha_x}{2}, \frac{\alpha_y}{2}, \frac{\alpha_z}{2}\right)$
     where $q_t$ is the attitude quaternion at the current moment, $q_{t_0}$ is the attitude quaternion at the previous moment, $q_{dt}$ is the quaternion change over this interval, and $\alpha_x$, $\alpha_y$, $\alpha_z$ are the corresponding angle changes about the X, Y and Z axes.
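
     Writing $q_{dt}$ in axis-angle form with rotation vector $\alpha = (\alpha_x, \alpha_y, \alpha_z)$ and applying the two limits above makes this step explicit (a standard small-angle approximation):
     $q_{dt} = \begin{bmatrix} \cos\left(\frac{\lVert\alpha\rVert}{2}\right) \\ \sin\left(\frac{\lVert\alpha\rVert}{2}\right)\cdot\frac{\alpha}{\lVert\alpha\rVert} \end{bmatrix} \approx \begin{bmatrix} 1 \\ \frac{\alpha}{2} \end{bmatrix} = \left(1, \frac{\alpha_x}{2}, \frac{\alpha_y}{2}, \frac{\alpha_z}{2}\right), \qquad \lVert\alpha\rVert \to 0$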

     Part of the ROS + Eigen code:

    std::vector<sensor_msgs::Imu::ConstPtr> imu_msg_vector;
    std::mutex mutex_lock;
    
    // IMU message callback: store incoming IMU messages
    void imu_cb(const sensor_msgs::Imu::ConstPtr &imu_msg)
    {
        mutex_lock.lock();
        imu_msg_vector.push_back(imu_msg);
        mutex_lock.unlock();
    }
    
    // IMU angle integration over one point-cloud frame.
    // imu_msg_v holds the IMU messages whose timestamps fall within the current frame.
    Eigen::Quaterniond imu_q = Eigen::Quaterniond::Identity(); // accumulated rotation R
    Eigen::Vector3d angular_velocity_1;
    angular_velocity_1 << imu_msg_v[0]->angular_velocity.x, imu_msg_v[0]->angular_velocity.y, imu_msg_v[0]->angular_velocity.z;
    double latest_time = imu_msg_v[0]->header.stamp.toSec();
    for (size_t i = 1; i < imu_msg_v.size(); i++)
    {
        double t = imu_msg_v[i]->header.stamp.toSec();
        double dt = t - latest_time;
        latest_time = t;
        Eigen::Vector3d angular_velocity_2;
        angular_velocity_2 << imu_msg_v[i]->angular_velocity.x, imu_msg_v[i]->angular_velocity.y, imu_msg_v[i]->angular_velocity.z;
        // midpoint rule: average the previous and current angular velocities
        Eigen::Vector3d aver_angular_vel = (angular_velocity_1 + angular_velocity_2) / 2.0;
        // small-angle quaternion increment q_dt = (1, alpha/2); renormalize to stay a unit quaternion
        imu_q = imu_q * Eigen::Quaterniond(1, 0.5 * aver_angular_vel(0) * dt, 0.5 * aver_angular_vel(1) * dt, 0.5 * aver_angular_vel(2) * dt);
        imu_q.normalize();
        angular_velocity_1 = angular_velocity_2; // the current sample becomes the previous one
    }
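
     The snippet above assumes that imu_msg_v already contains only the IMU messages of the current scan. One way to obtain it, sketched here as an assumption rather than taken from the original post, is to copy from imu_msg_vector all messages whose timestamps fall inside the scan interval:

    // Hypothetical helper (not from the original post): collect the IMU messages
    // whose timestamps lie within [scan_start, scan_end] of the current point cloud.
    std::vector<sensor_msgs::Imu::ConstPtr> collect_frame_imu(double scan_start, double scan_end)
    {
        std::vector<sensor_msgs::Imu::ConstPtr> imu_msg_v;
        mutex_lock.lock();
        for (const auto &msg : imu_msg_vector)
        {
            double t = msg->header.stamp.toSec();
            if (t >= scan_start && t <= scan_end)
                imu_msg_v.push_back(msg);
        }
        // drop messages older than this scan to bound memory
        while (!imu_msg_vector.empty() &&
               imu_msg_vector.front()->header.stamp.toSec() < scan_start)
            imu_msg_vector.erase(imu_msg_vector.begin());
        mutex_lock.unlock();
        return imu_msg_v;
    }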
    
  2. IMU -> lidar attitude transformation: The IMU attitude change measured above needs to be mapped into the lidar coordinate system. This requires the extrinsic transformation between the lidar and the IMU (the translation vector is not used, but the rotation matrix is). $R_{lidar}^{IMU}$ is the rotation matrix from the lidar to the IMU (it must be calibrated in advance), and it can be considered constant while the lidar and IMU move together. The two rotations are related by:
     $R_{lidar_0}^{lidar_1} \cdot R_{lidar}^{IMU} = R_{lidar}^{IMU} \cdot R_{IMU_0}^{IMU_1}$
     where $R_{lidar_0}^{lidar_1}$ is the attitude transformation of the lidar from the initial state to the final state, and $R_{IMU_0}^{IMU_1}$ is the attitude transformation of the IMU from the initial state to the final state (that is, the IMU attitude quaternion $q$ integrated above). Right-multiplying both sides by the inverse of $R_{lidar}^{IMU}$ gives:
     $R_{lidar_0}^{lidar_1} = R_{lidar}^{IMU} \cdot R_{IMU_0}^{IMU_1} \cdot \left(R_{lidar}^{IMU}\right)^{-1}$
    Eigen code:

    Eigen::Matrix3d lidar_imu_R; // lidar -> IMU extrinsic rotation matrix
    lidar_imu_R << -1,           -1.22465e-16, 0,
                    1.22465e-16, -1,           0,
                    0,            0,           1;
    Eigen::Matrix3d lidar_R;
    lidar_R = lidar_imu_R * (imu_q.toRotationMatrix()) * lidar_imu_R.inverse();
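
     The interpolation code in the next step uses two lidar attitude quaternions, lidar_q0 and lidar_q, which the original snippets do not define. A reasonable reading (an assumption, not taken from the original code) is that lidar_q0 is the attitude at the start of the scan, i.e. the identity since everything is expressed relative to the initial state, and lidar_q is the end-of-scan attitude converted from lidar_R:

    // Assumed definitions (not shown in the original snippets):
    Eigen::Quaterniond lidar_q0 = Eigen::Quaterniond::Identity(); // lidar attitude at scan start
    Eigen::Quaterniond lidar_q(lidar_R);                          // lidar attitude at scan end
    lidar_q.normalize();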
    

    The effect is as follows:

  3. Spherical linear interpolation of the lidar attitude quaternion: within one period (100 ms) the lidar rotates clockwise when viewed from above, and the horizontal angle is defined as follows:
     [Figure: top view of the lidar's horizontal scan angle]

     The 360-degree horizontal plane is divided into 1800 bins. First compute the angle $\theta_i$ corresponding to each point:
     $\theta_i = \arccos\left(\frac{x_i}{\sqrt{x_i^2 + y_i^2}}\right), \ \text{if } y_i < 0; \qquad \theta_i = 2\pi - \arccos\left(\frac{x_i}{\sqrt{x_i^2 + y_i^2}}\right), \ \text{if } y_i > 0$
     Then interpolate. Note that the point cloud at the current moment is transformed onto the lidar attitude of the final state:
     $q_{slerp}=\mathrm{Slerp}\left(q_{0}, q_{1} ; t\right)=q_{0}\left(q_{0}^{-1} q_{1}\right)^{t}=q_{1}\left(q_{1}^{-1} q_{0}\right)^{1-t}=\left(q_{0} q_{1}^{-1}\right)^{1-t} q_{1}=\left(q_{1} q_{0}^{-1}\right)^{t} q_{0}$
     where $q_0$ is the initial-state quaternion, $q_1$ is the final-state quaternion, and $t \in [0,1]$ is the interpolation parameter.

     Converting the quaternion $q_{slerp}$ to rotation-matrix form $R_{slerp}$, the coordinate transformation formula can then be applied:
     $P_1 = \begin{bmatrix} x_1 \\ y_1 \\ z_1 \\ 1 \end{bmatrix} = T_{slerp} \cdot P_0 = \begin{bmatrix} R_{slerp} & \boldsymbol{0} \\ \boldsymbol{0} & 1 \end{bmatrix} \cdot \begin{bmatrix} x_0 \\ y_0 \\ z_0 \\ 1 \end{bmatrix}$
     where $P_0$ is the original point coordinate and $P_1$ is the transformed point coordinate.
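
     One detail the formulas leave implicit is how the interpolation parameter $t$ is chosen for each point. In the code below, the point's horizontal angle is mapped linearly onto the scan period (assuming the lidar sweeps the full $360^{\circ}$ at constant speed within the frame):
     $t_i = 1 - \frac{\theta_i}{360^{\circ}}, \qquad q_{slerp,i} = \mathrm{Slerp}\left(q_0, q_1;\ t_i\right)$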

     Part of the Eigen code:

    pcl::PointCloud<pcl::PointXYZI> cloud_undistorted;
    pcl::copyPointCloud(cloud_in, cloud_undistorted);
    
    for (size_t i = 0; i < cloud_undistorted.points.size(); i++)
    {
        double horizon_angle; // horizontal angle of this point, in degrees
        if (cloud_undistorted.points[i].y < 0)
        {
            double theta = acos(cloud_undistorted.points[i].x / sqrt(cloud_undistorted.points[i].x * cloud_undistorted.points[i].x + cloud_undistorted.points[i].y * cloud_undistorted.points[i].y));
            horizon_angle = theta * 180.0 / M_PI;
        }
        else
        {
            double theta = acos(cloud_undistorted.points[i].x / sqrt(cloud_undistorted.points[i].x * cloud_undistorted.points[i].x + cloud_undistorted.points[i].y * cloud_undistorted.points[i].y));
            horizon_angle = (2 * M_PI - theta) * 180.0 / M_PI;
        }
        int id = int(horizon_angle / 360.0 * 1800); // bin index out of the 1800 horizontal steps
        // std::cout << "horizon_angle: " << horizon_angle << ", id: " << id << std::endl;
    
        // interpolate between the start attitude (lidar_q0) and the end attitude (lidar_q)
        Eigen::Quaterniond q_slerp = lidar_q0.slerp(1.0 - horizon_angle / 360.0, lidar_q);
        Eigen::Matrix3d R_slerp = q_slerp.toRotationMatrix();
        Eigen::Matrix4d T_lidar_slerp;
        T_lidar_slerp << R_slerp(0, 0), R_slerp(0, 1), R_slerp(0, 2), 0,
            R_slerp(1, 0), R_slerp(1, 1), R_slerp(1, 2), 0,
            R_slerp(2, 0), R_slerp(2, 1), R_slerp(2, 2), 0,
            0, 0, 0, 1;
        // transform the point onto the end-of-scan lidar attitude
        Eigen::Vector4d Pi, Pj;
        Pi << cloud_undistorted.points[i].x, cloud_undistorted.points[i].y, cloud_undistorted.points[i].z, 1;
        Pj = T_lidar_slerp.inverse() * Pi;
        cloud_undistorted.points[i].x = Pj(0);
        cloud_undistorted.points[i].y = Pj(1);
        cloud_undistorted.points[i].z = Pj(2);
    }
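
     Putting the three steps together, a point-cloud callback might look roughly like the sketch below. It is an assumption about the surrounding node, not the original code: undistort_cloud is a hypothetical wrapper around the integration, extrinsic-rotation and per-point slerp snippets above, collect_frame_imu is the helper sketched in step 1, and the 100 ms scan duration comes from the 10 Hz VLP-16:

    // Hypothetical wrapper assumed to run the three snippets above on one frame's data.
    pcl::PointCloud<pcl::PointXYZI> undistort_cloud(
        const pcl::PointCloud<pcl::PointXYZI> &cloud_in,
        const std::vector<sensor_msgs::Imu::ConstPtr> &imu_msg_v);
    
    ros::Publisher undistorted_pub; // assumed to be advertised in main()
    
    void cloud_cb(const sensor_msgs::PointCloud2::ConstPtr &cloud_msg)
    {
        double scan_start = cloud_msg->header.stamp.toSec();
        double scan_end = scan_start + 0.1; // one VLP-16 period at 10 Hz
    
        // step 1: gather the IMU messages covering this scan
        std::vector<sensor_msgs::Imu::ConstPtr> imu_msg_v = collect_frame_imu(scan_start, scan_end);
        if (imu_msg_v.size() < 2)
            return; // not enough IMU data to correct this frame
    
        pcl::PointCloud<pcl::PointXYZI> cloud_in;
        pcl::fromROSMsg(*cloud_msg, cloud_in);
    
        // steps 2-3: integrate, map to the lidar frame and slerp per point
        pcl::PointCloud<pcl::PointXYZI> cloud_undistorted = undistort_cloud(cloud_in, imu_msg_v);
    
        sensor_msgs::PointCloud2 out_msg;
        pcl::toROSMsg(cloud_undistorted, out_msg);
        out_msg.header = cloud_msg->header; // keep the original stamp and frame_id
        undistorted_pub.publish(out_msg);
    }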
    
  4. Effect:

    Single-frame point cloud processing effect:
     [Figure: single-frame de-distortion result]
     The white point cloud in the figure is the original data, and the green one is the de-distorted result. The benefit is hard to see from a single frame, so a mapping example is used below to illustrate it.

    Outdoor corridor environment:
     [Figure: outdoor corridor environment]
     Red marks the direction of travel of the handheld lidar. Mapping with A-LOAM without de-distortion gives:
     [Figure: A-LOAM map built without de-distortion]
     After de-distortion, mapping remains rock-solid even in a large environment with large rotations; the effect is as follows:
     [Figure: A-LOAM map built after de-distortion]

Origin: blog.csdn.net/weixin_41681988/article/details/124378390