Analysis of the OpenCV Sample: Camera Calibration (4)

Today we analyze the runCalibration() function. It is invoked like this:

    bool ok = runCalibration(s, imageSize, cameraMatrix, distCoeffs, imagePoints, rvecs, tvecs, reprojErrs,
                             totalAvgErr);

Its prototype looks like this:

static bool runCalibration( Settings& s, Size& imageSize, Mat& cameraMatrix, Mat& distCoeffs,
                            vector<vector<Point2f> > imagePoints, vector<Mat>& rvecs, vector<Mat>& tvecs,
                            vector<float>& reprojErrs,  double& totalAvgErr)

The parameter names largely explain themselves: s holds the settings read from the configuration file, imageSize the size of the input images, imagePoints the detected 2D corners of every view, cameraMatrix and distCoeffs receive the intrinsic matrix and distortion coefficients, rvecs and tvecs the per-view rotation and translation vectors, and reprojErrs and totalAvgErr the reprojection errors. With these in mind, let's look at the code:

    //! [fixed_aspect]
    cameraMatrix = Mat::eye(3, 3, CV_64F);
    if( s.flag & CALIB_FIX_ASPECT_RATIO )
        cameraMatrix.at<double>(0,0) = s.aspectRatio;
    //! [fixed_aspect]
    if (s.useFisheye) {
        distCoeffs = Mat::zeros(4, 1, CV_64F);
    } else {
        distCoeffs = Mat::zeros(8, 1, CV_64F);
    }

cameraMatrix is initialized to a 3×3 identity matrix. The CALIB_FIX_ASPECT_RATIO flag indicates whether the ratio of the focal lengths fx and fy is fixed; when it is, the aspect ratio is read from the in_VID5.xml configuration file and seeded into cameraMatrix(0,0). The distortion coefficients distCoeffs are initialized to zero at the same time: four coefficients for the fisheye model, eight otherwise.
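As a quick reminder (my own sketch, not code from the sample), the intrinsic matrix stores the focal lengths on its diagonal, so seeding element (0,0) before calibration is exactly what lets CALIB_FIX_ASPECT_RATIO pin the ratio fx/fy:

    // Layout of the 3x3 intrinsic matrix that calibrateCamera() refines:
    //     [ fx  0  cx ]
    //     [  0 fy  cy ]
    //     [  0  0   1 ]
    // With CALIB_FIX_ASPECT_RATIO set, only fy is treated as a free parameter and
    // fx/fy is kept equal to the ratio found in the input cameraMatrix.
    Mat cameraMatrix = Mat::eye(3, 3, CV_64F);
    double aspectRatio = 1.0;                    // hypothetical value; in the sample it comes from in_VID5.xml
    cameraMatrix.at<double>(0, 0) = aspectRatio; // seeds fx/fy = aspectRatio (fy starts at 1)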

Next,

    vector<vector<Point3f> > objectPoints(1);
    calcBoardCornerPositions(s.boardSize, s.squareSize, objectPoints[0], s.calibrationPattern);

    objectPoints.resize(imagePoints.size(),objectPoints[0]);

What exactly does the calcBoardCornerPositions() function do here?

static void calcBoardCornerPositions(Size boardSize, float squareSize, vector<Point3f>& corners,
                                     Settings::Pattern patternType /*= Settings::CHESSBOARD*/)
{
    corners.clear();

    switch(patternType)
    {
    case Settings::CHESSBOARD:
    case Settings::CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; ++i )
            for( int j = 0; j < boardSize.width; ++j )
                corners.push_back(Point3f(j*squareSize, i*squareSize, 0));
        break;

    case Settings::ASYMMETRIC_CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; i++ )
            for( int j = 0; j < boardSize.width; j++ )
                corners.push_back(Point3f((2*j + i % 2)*squareSize, i*squareSize, 0));
        break;
    default:
        break;
    }
}

This function computes the 3D coordinates of the board's corner points from the board dimensions boardSize and the size of a single square squareSize, and stores them in the vector<Point3f> variable corners. Because the board is planar, every corner has Z = 0; for the asymmetric circles grid, the points in odd rows are shifted by one square to reproduce the staggered layout.
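A quick usage sketch (the 9×6 board and the 25 mm square size are made-up values, not taken from the sample):

    // Generate the reference coordinates of a hypothetical 9x6 chessboard with
    // 25 mm squares; every corner lies in the board plane, so Z is always 0.
    vector<Point3f> corners;
    calcBoardCornerPositions(Size(9, 6), 25.f, corners, Settings::CHESSBOARD);
    // corners[0] == (0, 0, 0), corners[1] == (25, 0, 0), ..., corners[9] == (0, 25, 0)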

The statement vector<vector<Point3f> > objectPoints(1) creates a vector with exactly one element, which calcBoardCornerPositions() fills with the board's corner pattern.

The resize call that follows may look puzzling at first, but it is important: the same physical board appears in every captured view, so objectPoints needs one copy of that corner pattern per image.

objectPoints.resize(imagePoints.size(),objectPoints[0]);
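A minimal standalone sketch of the resize idiom (not code from the sample): std::vector::resize(n, value) grows the vector to n elements and fills each new slot with a copy of value, so afterwards objectPoints.size() equals imagePoints.size().

    #include <opencv2/core.hpp>
    #include <vector>
    using namespace cv;
    using namespace std;

    int main()
    {
        vector<vector<Point3f> > objectPoints(1);
        objectPoints[0].push_back(Point3f(0, 0, 0));    // stand-in for the real corner list
        size_t numViews = 5;                            // stand-in for imagePoints.size()
        objectPoints.resize(numViews, objectPoints[0]); // now 5 identical copies of the pattern
        return 0;
    }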

Then the calibration itself is carried out:

    //Find intrinsic and extrinsic camera parameters
    double rms;

    if (s.useFisheye) {
        Mat _rvecs, _tvecs;
        rms = fisheye::calibrate(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs, _rvecs,
                                 _tvecs, s.flag);

        rvecs.reserve(_rvecs.rows);
        tvecs.reserve(_tvecs.rows);
        for(int i = 0; i < int(objectPoints.size()); i++){
            rvecs.push_back(_rvecs.row(i));
            tvecs.push_back(_tvecs.row(i));
        }
    } else {
        rms = calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs,
                              s.flag);
    }

    cout << "Re-projection error reported by calibrateCamera: "<< rms << endl; 

We are not concerned with the internal implementation of calibrateCamera() here; knowing how to call it is enough. The OpenCV documentation describes the function in detail.

In short, calibrateCamera() produces the camera's intrinsic matrix cameraMatrix, the distortion coefficients distCoeffs, and, for each view, a rotation vector in rvecs and a translation vector in tvecs.
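For reference, here is one way to inspect those results afterwards (my own sketch; the variable names follow the sample, but the printing and the Rodrigues conversion are additions):

    // Pull the usual intrinsics out of cameraMatrix.
    double fx = cameraMatrix.at<double>(0, 0);
    double fy = cameraMatrix.at<double>(1, 1);
    double cx = cameraMatrix.at<double>(0, 2);
    double cy = cameraMatrix.at<double>(1, 2);
    cout << "fx=" << fx << " fy=" << fy << " cx=" << cx << " cy=" << cy << endl;

    // Each rvecs[i] is a 3x1 Rodrigues rotation vector; cv::Rodrigues turns it
    // into the 3x3 rotation matrix of view i.
    Mat R;
    Rodrigues(rvecs[0], R);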

Finally, let's analyze the last three statements of the function. checkRange() is easy to understand: it checks whether the calibration results contain any invalid values. The OpenCV documentation explains it as follows:

The function cv::checkRange checks that every array element is neither NaN nor infinite. When minVal > -DBL_MAX and maxVal < DBL_MAX, the function also checks that each value is between minVal and maxVal. In case of multi-channel arrays, each channel is processed independently. If some values are out of range, position of the first outlier is stored in pos (when pos != NULL). Then, the function either returns false (when quiet=true) or throws an exception.

    bool ok = checkRange(cameraMatrix) && checkRange(distCoeffs);

    totalAvgErr = computeReprojectionErrors(objectPoints, imagePoints, rvecs, tvecs, cameraMatrix,
                                            distCoeffs, reprojErrs, s.useFisheye);

    return ok;

Now let's look at the computeReprojectionErrors() function. It is fairly easy to follow: for each view it reprojects the board's 3D points and measures how far those reprojections fall from the detected image points; pay attention to how projectPoints() is used for the pinhole and fisheye models.

static double computeReprojectionErrors( const vector<vector<Point3f> >& objectPoints,
                                         const vector<vector<Point2f> >& imagePoints,
                                         const vector<Mat>& rvecs, const vector<Mat>& tvecs,
                                         const Mat& cameraMatrix , const Mat& distCoeffs,
                                         vector<float>& perViewErrors, bool fisheye)
{
    vector<Point2f> imagePoints2;
    size_t totalPoints = 0;
    double totalErr = 0, err;
    perViewErrors.resize(objectPoints.size());

    for(size_t i = 0; i < objectPoints.size(); ++i )
    {
        if (fisheye)
        {
            fisheye::projectPoints(objectPoints[i], imagePoints2, rvecs[i], tvecs[i], cameraMatrix,
                                   distCoeffs);
        }
        else
        {
            projectPoints(objectPoints[i], rvecs[i], tvecs[i], cameraMatrix, distCoeffs, imagePoints2);
        }
        err = norm(imagePoints[i], imagePoints2, NORM_L2);

        size_t n = objectPoints[i].size();
        perViewErrors[i] = (float) std::sqrt(err*err/n);
        totalErr        += err*err;
        totalPoints     += n;
    }

    return std::sqrt(totalErr/totalPoints);
}
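To spell out the bookkeeping (a worked example with made-up numbers, not output from the sample): perViewErrors[i] is the RMS reprojection error of view i, and the return value is the RMS pooled over every corner of every view.

    #include <cmath>
    #include <cstdio>

    // Hypothetical numbers: two views with 54 corners each and L2 residual norms
    // of 3.6 and 4.1 pixels, as computed by norm(imagePoints[i], imagePoints2, NORM_L2).
    int main()
    {
        double err0 = 3.6, err1 = 4.1;
        size_t n0 = 54, n1 = 54;
        double perView0 = std::sqrt(err0 * err0 / n0);                    // ~0.49 px for view 0
        double totalAvg = std::sqrt((err0*err0 + err1*err1) / (n0 + n1)); // pooled RMS over all corners
        std::printf("view 0 RMS = %.3f px, overall RMS = %.3f px\n", perView0, totalAvg);
        return 0;
    }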

Reprinted from blog.csdn.net/qq_39732684/article/details/80372067