Obstacle recognition algorithm development (3): Parallel binocular stereo vision algorithm

1. Project background

Develop a binocular vision algorithm using two cameras of the same model and lens to measure the distance to objects at long range.

2. Development environment and equipment

VS2019 + OpenCV 4.5.4; MATLAB 2021a for binocular camera calibration; two cameras of the same model and lens, guaranteed to trigger at the same time.

3. Detailed design plan

(1) Binocular equipment installation requirements and calculation principles
The parallel binocular vision measurement method is used: the two cameras are mounted in parallel with an installation distance (baseline) of 1 meter.
The imaging principle is shown in the figure below:
[Figure: parallel binocular imaging model]
Establish a spatial rectangular coordinate system, construct similar triangles, and solve for the parallax:
[Figure 5-2: The principle of binocular vision ranging]
As shown in Figure 5-2, by the similar-triangle principle, triangle P-L1-L2 is similar to triangle P-C1-C2, which gives Formula 5-1:

    (B - d) / B = (Z - f) / Z

After simplification, Formula 5-2:

    Z = B * f / d

where B is the distance between the two cameras, called the baseline; f is the focal length of the camera; and d = x_l - x_r is the parallax (disparity). In Formula 5-2 the baseline B and focal length f are known, so the object distance Z can be obtained by solving only for the parallax d.
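For a quick sense of scale: with the 1 m baseline used here and a hypothetical focal length of 1200 pixels, a disparity of 24 pixels gives Z = 1000 mm × 1200 / 24 = 50,000 mm, i.e. an object 50 m away.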

(2) Use MATLAB for binocular camera calibration
1) Before calibration, the images of the two cameras need to be numbered to ensure one-to-one correspondence.
2) Use more than 40 calibration images in total, about 20 each from the left and right cameras.
3) The movement range of the calibration plate covers the field of view that needs to be measured as much as possible.
4) The calibration board should not be too small: a board that is too small makes the calibration inaccurate and the reprojection error relatively large. When higher accuracy is required, switch to a larger calibration board.
(3) The calibration results are as follows; the mean reprojection error is about 1.3 pixels.
[Figure: MATLAB stereo calibration results]
(4) Calibration verification:
Check that corresponding points in the two images share the same y coordinate (row alignment). This facilitates subsequent stereo matching and also verifies that the calibration results are accurate.
[Figure: stereo-rectified left/right image pair, corresponding rows aligned]
The stereo-rectified results in the figure match fairly well, and the error is acceptable. To migrate to the C++ project, save the intrinsic parameters and distortion coefficients of both cameras, together with the shared R and T matrices.
(5) Save the calibration results from MATLAB.
The following MATLAB code is adapted from another blogger's post:

rowName = cell(1,10);
rowName{1,1}  = 'Translation matrix';
rowName{1,2}  = 'Rotation matrix';
rowName{1,3}  = 'Camera 1 intrinsic matrix';
rowName{1,4}  = 'Camera 1 radial distortion';
rowName{1,5}  = 'Camera 1 tangential distortion';
rowName{1,6}  = 'Camera 2 intrinsic matrix';
rowName{1,7}  = 'Camera 2 radial distortion';
rowName{1,8}  = 'Camera 2 tangential distortion';
rowName{1,9}  = 'Camera 1 distortion vector';
rowName{1,10} = 'Camera 2 distortion vector';
xlswrite('out.xlsx',rowName(1,1),1,'A1');
xlswrite('out.xlsx',rowName(1,2),1,'A2');
xlswrite('out.xlsx',rowName(1,3),1,'A5');
xlswrite('out.xlsx',rowName(1,4),1,'A8');
xlswrite('out.xlsx',rowName(1,5),1,'A9');
xlswrite('out.xlsx',rowName(1,6),1,'A10');
xlswrite('out.xlsx',rowName(1,7),1,'A13');
xlswrite('out.xlsx',rowName(1,8),1,'A14');
xlswrite('out.xlsx',rowName(1,9),1,'A15');
xlswrite('out.xlsx',rowName(1,10),1,'A16');
xlswrite('out.xlsx',stereoParams.TranslationOfCamera2,1,'B1');  % translation matrix
xlswrite('out.xlsx',stereoParams.RotationOfCamera2.',1,'B2');  % rotation matrix
xlswrite('out.xlsx',stereoParams.CameraParameters1.IntrinsicMatrix.',1,'B5');  % camera 1 intrinsic matrix
xlswrite('out.xlsx',stereoParams.CameraParameters1.RadialDistortion,1,'B8');  % camera 1 radial distortion (elements 1,2,5 of the distortion vector)
xlswrite('out.xlsx',stereoParams.CameraParameters1.TangentialDistortion,1,'B9');  % camera 1 tangential distortion (elements 3,4)
xlswrite('out.xlsx',stereoParams.CameraParameters2.IntrinsicMatrix.',1,'B10');  % camera 2 intrinsic matrix
xlswrite('out.xlsx',stereoParams.CameraParameters2.RadialDistortion,1,'B13');  % camera 2 radial distortion (elements 1,2,5)
xlswrite('out.xlsx',stereoParams.CameraParameters2.TangentialDistortion,1,'B14');  % camera 2 tangential distortion (elements 3,4)
xlswrite('out.xlsx',[stereoParams.CameraParameters1.RadialDistortion(1:2), stereoParams.CameraParameters1.TangentialDistortion,...
    stereoParams.CameraParameters1.RadialDistortion(3)],1,'B15');  % camera 1 distortion vector (k1,k2,p1,p2,k3)
xlswrite('out.xlsx',[stereoParams.CameraParameters2.RadialDistortion(1:2), stereoParams.CameraParameters2.TangentialDistortion,...
    stereoParams.CameraParameters2.RadialDistortion(3)],1,'B16');  % camera 2 distortion vector (k1,k2,p1,p2,k3)

The last two distortion lines may throw an error, depending on which radial-distortion option was checked during calibration. If the "2 Coefficients" option was selected, only two radial coefficients are output and the third one (k3) does not exist, so delete the RadialDistortion(3) reference from those lines in MATLAB.
The matrices are already transposed in the saved out.xlsx, so the values can be used directly in C++.
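As a hedged illustration of what "used directly" means (the numbers below are placeholders, not real calibration output): after the transpose, the spreadsheet rows match OpenCV's row-major intrinsic layout and its (k1, k2, p1, p2, k3) distortion order.

#include <opencv2/core.hpp>
using cv::Mat; using cv::Mat_;

// Placeholder values for illustration only -- paste your own out.xlsx numbers.
Mat cameraMatrixL = (Mat_<double>(3, 3) <<
	1234.5, 0.0,    640.1,    // fx,  0, cx
	0.0,    1236.7, 512.3,    //  0, fy, cy
	0.0,    0.0,    1.0);
// OpenCV distortion order: k1, k2, p1, p2, k3 (the saved "distortion vector" row)
Mat distCoeffL = (Mat_<double>(5, 1) << -0.12, 0.08, 0.001, -0.0005, 0.0);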

(6) Engineering application
Stereo matching and stereo rectification:
In actual use the cameras are never perfectly parallel (installation angle errors and so on), so stereo rectification is required.

//2. Stereo calibration and rectification of the binocular pair
void ObstacleRecongnition::stereo_check(Mat grayImageL, Mat grayImageR, Mat &rectifyImageL, Mat &rectifyImageR, Size imageSize, Mat &Q, Rect &validROIL, Rect &validROIR) {

	Mat mapLx, mapLy, mapRx, mapRy;     // remap lookup tables
	Mat Rl, Rr, Pl, Pr;                 // rectification rotations R, projection matrices P; Q is the reprojection matrix
	// intrinsic matrix of the left camera
	Mat cameraMatrixL = (Mat_<double>(3, 3) <<
		left_fx, 0, left_cx,
		0, left_fy, left_cy,
		0, 0, 1);
	// distortion coefficients of the left camera (k1, k2, p1, p2, k3)
	Mat distCoeffL = (Mat_<double>(5, 1) << left_k1, left_k2, left_p1, left_p2, left_k3);
	// intrinsic matrix of the right camera
	Mat cameraMatrixR = (Mat_<double>(3, 3) <<
		right_fx, 0, right_cx,
		0, right_fy, right_cy,
		0, 0, 1);
	// distortion coefficients of the right camera
	Mat distCoeffR = (Mat_<double>(5, 1) << right_k1, right_k2, right_p1, right_p2, right_k3);

	// translation vector T between the two calibrated cameras
	Mat T = (Mat_<double>(3, 1) << double_py_x1, double_py_x2, double_py_x3);

	// rotation between the two calibrated cameras
	Mat rec = (Mat_<double>(3, 3) <<
		double_xz_x1, double_xz_x2, double_xz_x3,
		double_xz_x4, double_xz_x5, double_xz_x6,
		double_xz_x7, double_xz_x8, double_xz_x9);

	// MATLAB exports a full 3x3 rotation matrix, so it can be passed to
	// stereoRectify directly; a Rodrigues conversion is only needed when the
	// calibration provides a 3x1 rotation vector instead.
	Mat R = rec;

	// stereo rectification
	stereoRectify(cameraMatrixL, distCoeffL, cameraMatrixR, distCoeffR, imageSize, R, T, Rl, Rr, Pl, Pr, Q, CALIB_ZERO_DISPARITY,
		0, imageSize, &validROIL, &validROIR);
	// compute the undistortion and rectification maps
	// note: the left camera must use its own projection matrix Pl (passing Pr here was a bug)
	initUndistortRectifyMap(cameraMatrixL, distCoeffL, Rl, Pl, imageSize, CV_32FC1, mapLx, mapLy);
	initUndistortRectifyMap(cameraMatrixR, distCoeffR, Rr, Pr, imageSize, CV_32FC1, mapRx, mapRy);

	// remap moves each pixel to its rectified position;
	// after remap, the left and right images are coplanar and row-aligned
	remap(grayImageL, rectifyImageL, mapLx, mapLy, INTER_LINEAR); // last parameter is the interpolation method
	remap(grayImageR, rectifyImageR, mapRx, mapRy, INTER_LINEAR);
	// the output images have the same size and type as the input

#if TEST_LOG
	// visualize the rectification result
	Mat rgbRectifyImageL, rgbRectifyImageR;
	cv::cvtColor(rectifyImageL, rgbRectifyImageL, COLOR_GRAY2BGR);
	cv::cvtColor(rectifyImageR, rgbRectifyImageR, COLOR_GRAY2BGR);
	Mat canvas;
	double sf;
	int w, h;
	sf = 600. / MAX(rectifyImageL.cols, rectifyImageL.rows);
	w = cvRound(rectifyImageL.cols * sf);
	h = cvRound(rectifyImageL.rows * sf);
	canvas.create(h, w * 2, CV_8UC3);   // note the channel count

	// draw the left image onto the canvas
	Mat canvasPart = canvas(Rect(w * 0, 0, w, h));                             // take one part of the canvas
	resize(rgbRectifyImageL, canvasPart, canvasPart.size(), 0, 0, INTER_AREA); // scale the image to the size of canvasPart

	// draw the right image onto the canvas
	canvasPart = canvas(Rect(w, 0, w, h));                                     // take the other part of the canvas
	resize(rgbRectifyImageR, canvasPart, canvasPart.size(), 0, 0, INTER_LINEAR);

	// draw horizontal epipolar lines; after rectification, matching features
	// should lie on the same green line in both halves
	for (int i = 0; i < canvas.rows; i += 16)
		line(canvas, Point(0, i), Point(canvas.cols, i), Scalar(0, 255, 0), 1, 8);

	imwrite("D:\\桌面文件\\save\\Rectifyimages.jpg", canvas);
#endif
}

The binocular vision pipeline in C++:

// binocular vision algorithm
void ObstacleRecongnition::binocular_vision(Mat rgbImageL, Mat rgbImageR, Mat &rectifyImageL1, Mat &rectifyImageR1) {

	Mat rectifyImageL, rectifyImageR;
	Mat grayImageL, grayImageR;
	const int imageWidth = rgbImageL.cols;
	const int imageHeight = rgbImageL.rows;
	Size imageSize = Size(imageWidth, imageHeight);

	//1. image acquisition: convert to grayscale
	cv::cvtColor(rgbImageL, grayImageL, COLOR_BGR2GRAY);
	cv::cvtColor(rgbImageR, grayImageR, COLOR_BGR2GRAY);

	//2. stereo rectification: the images become row-aligned, so the same feature
	//   in the left and right camera images lies on the same epipolar line
	stereo_check(grayImageL, grayImageR, rectifyImageL, rectifyImageR, imageSize, Q, validROIL, validROIR);

	cv::cvtColor(rectifyImageL, rectifyImageL1, COLOR_GRAY2BGR);
	cv::cvtColor(rectifyImageR, rectifyImageR1, COLOR_GRAY2BGR);
}
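
A minimal usage sketch of the pipeline, assuming synchronized captures saved to disk (the file names are placeholders):

#include <opencv2/opencv.hpp>
using namespace cv;

// Hedged sketch -- image paths stand in for one synchronized frame pair.
void rectify_demo(ObstacleRecongnition &rec) {
	Mat rgbL = imread("left_0001.jpg");   // frame from the left camera
	Mat rgbR = imread("right_0001.jpg");  // simultaneously triggered right frame
	Mat rectL, rectR;
	rec.binocular_vision(rgbL, rgbR, rectL, rectR);
	// rectL / rectR are now coplanar and row-aligned, ready for stereo matching
}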

4. Calculate distance through stereo matching algorithm

The core of binocular vision is computing distance from parallax: z = bf/d,
where b is the distance between the two cameras (the baseline), f is the focal length of the camera, and d is the parallax, d = xl - xr, i.e. the difference between the x coordinates of the same point in the left and right images; the unit of d is pixels.

This is the current mainstream approach: run stereo matching, generate a disparity map, and compute distance from it. The code is simple, since it is essentially all packaged in OpenCV; you only need to tune a few parameters for your images.
Disadvantages: slow, and a good disparity map is not easy to obtain.
Implementation:

void ObstacleRecongnition::stereo_match(Mat rectifyImageL, Mat rectifyImageR, Mat Q, Mat &xyz, Rect validROIL, Rect validROIR, Mat &SC_Map)
{

	Ptr<StereoSGBM> sgbm = StereoSGBM::create(0, StereoSGBM_width, StereoSGBM_height);
	sgbm->setPreFilterCap(PreFilterCap);

	sgbm->setBlockSize(sgbmWinSize);
	int cn = rectifyImageL.channels();
	sgbm->setP1(8 * cn * sgbmWinSize * sgbmWinSize);
	sgbm->setP2(32 * cn * sgbmWinSize * sgbmWinSize);
	sgbm->setMinDisparity(0);
	sgbm->setNumDisparities(NumDisparities);
	sgbm->setUniquenessRatio(UniquenessRatio);
	sgbm->setSpeckleWindowSize(SpeckleWindowSize);
	sgbm->setSpeckleRange(SpeckleRange);
	sgbm->setDisp12MaxDiff(Disp12MaxDiff);
	sgbm->setMode(StereoSGBM::MODE_SGBM);
	Mat disp, disp8;
	sgbm->compute(rectifyImageL, rectifyImageR, disp);
	// optional: crop the black band on the left edge of the disparity map
	//Mat img1p, img2p;
	//copyMakeBorder(rectifyImageL, img1p, 0, 0, NumDisparities, 0, BORDER_REPLICATE);
	//copyMakeBorder(rectifyImageR, img2p, 0, 0, NumDisparities, 0, BORDER_REPLICATE);
	//dispf = disp.colRange(NumDisparities, img2p.cols - NumDisparities);

	// disparity-map filtering could be inserted here
	cv::imwrite("D:\\桌面文件\\save\\原始视差图.jpg", disp);

	disp.convertTo(disp8, CV_8U, 255 / (NumDisparities * 16.));
	// StereoSGBM disparities are fixed-point, scaled by 16, so the X/W, Y/W, Z/W
	// values from reprojectImageTo3D must be multiplied by 16 to obtain correct
	// 3-D coordinates.
	reprojectImageTo3D(disp, xyz, Q, true);
	xyz = xyz * 16;
	Mat color(disp.size(), CV_8UC3);
	GenerateFalseMap(disp8, color); // convert to a false-color disparity image
	imwrite("D:\\桌面文件\\save\\伪彩色视差图.jpg", color);
}
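
The matcher parameters above are project-level constants that the post does not list. As a hedged reference point (not the author's tuned values), typical settings look like this; the disparity range must be a multiple of 16:

// Illustrative values only -- tune per scene; these are not the author's settings.
int StereoSGBM_width  = 128; // numDisparities: disparity search range (multiple of 16)
int StereoSGBM_height = 5;   // blockSize: odd matching window, typically 3..11
int sgbmWinSize       = 5;   // same block size, reused for the P1/P2 penalties
int PreFilterCap      = 31;  // clip value for the x-derivative prefilter
int NumDisparities    = 128; // should match StereoSGBM_width above
int UniquenessRatio   = 10;  // 5..15 per the OpenCV docs
int SpeckleWindowSize = 100; // 50..200: max region size for speckle removal
int SpeckleRange      = 2;   // 1 or 2 is usually enough (scaled by 16 internally)
int Disp12MaxDiff     = 1;   // max allowed left-right consistency difference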

After obtaining the disparity map you can apply filtering and similar operations to it. Even without further processing, you can sanity-check the output by clicking a few points in the disparity map and comparing the reported distance with the actual distance at that position, as sketched below.
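
A minimal sketch of that click-to-check, assuming the xyz map produced by stereo_match above (the window name is illustrative):

#include <cstdio>
#include <opencv2/opencv.hpp>
using namespace cv;

// Hedged sketch: click a pixel in the displayed disparity map and print the
// straight-line distance from the xyz map produced by reprojectImageTo3D.
static void onMouse(int event, int x, int y, int, void *userdata) {
	if (event != EVENT_LBUTTONDOWN) return;
	const Mat *xyz = static_cast<const Mat *>(userdata);
	Vec3f p = xyz->at<Vec3f>(y, x);                  // note the (row, col) indexing
	std::printf("(%d, %d) -> %.1f mm\n", x, y, norm(p)); // distance in mm
}
// usage: imshow("disparity", disp8); setMouseCallback("disparity", onMouse, &xyz);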
Calculate distance from disparity map:

//4. compute distance from the disparity map
float ObstacleRecongnition::compute_distance(Mat xyz, float up_x, float up_y, float down_x, float down_y) {

	// pick a point in the disparity map
	Point origin = Point(up_x, up_y);
	// look up the 3-D coordinates reconstructed from the binocular parameters
	Vec3f point3 = xyz.at<Vec3f>(origin);

	float d = point3[0] * point3[0] + point3[1] * point3[1] + point3[2] * point3[2];
	d = sqrt(d);   // straight-line distance in mm
	return d;
}

5. Calculate distance through feature point matching algorithm (recommended)

1) Use yolov5 to detect targets in the images from both cameras and obtain the corresponding target in each.
2) Match feature points within the two target crops. The x coordinates x_before1 and x_before2 of the crop origins in the original images must be passed in so that the disparity d can be computed in full-image coordinates.

3) Compute the distance with the formula z = bf/d.
Feature point extraction can be implemented with ORB, SIFT, or SURF.
Taking SURF as an example: GPU-accelerated OpenCV is used, which requires OpenCV 4.5.4 built with CUDA support. The left and right camera images are input and their feature points are matched.

// SURF feature matching algorithm
float ObstacleRecongnition::surf_demo(Mat rectifyImageL, Mat rectifyImageR, int x_before1, int x_before2) {

	cv::cvtColor(rectifyImageL, rectifyImageL, COLOR_BGR2GRAY);
	cv::cvtColor(rectifyImageR, rectifyImageR, COLOR_BGR2GRAY);
	cv::cuda::printShortCudaDeviceInfo(cv::cuda::getDevice());
	cuda::SURF_CUDA surf(points_alg_surf);
	// detecting keypoints & computing descriptors
	cuda::GpuMat keypoints1GPU, keypoints2GPU;
	cuda::GpuMat descriptors1GPU, descriptors2GPU;

	cuda::GpuMat imageL, imageR;
	imageL.upload(rectifyImageL);
	imageR.upload(rectifyImageR);
	float dis;
	try {

		surf(imageL, cuda::GpuMat(), keypoints1GPU, descriptors1GPU);
		surf(imageR, cuda::GpuMat(), keypoints2GPU, descriptors2GPU);
		// matching descriptors
		Ptr<cv::cuda::DescriptorMatcher> matcher = cv::cuda::DescriptorMatcher::createBFMatcher(surf.defaultNorm());
		vector<DMatch> matches;
		matcher->match(descriptors1GPU, descriptors2GPU, matches);

		// downloading results
		vector<KeyPoint> keypoints1, keypoints2;
		vector<float> descriptors1, descriptors2;
		surf.downloadKeypoints(keypoints1GPU, keypoints1);
		surf.downloadKeypoints(keypoints2GPU, keypoints2);
		surf.downloadDescriptors(descriptors1GPU, descriptors1);
		surf.downloadDescriptors(descriptors2GPU, descriptors2);

		// filter by match distance; some feature points are mismatched,
		// so the threshold can be adjusted for the actual scene.
		// calculate the max and min distance between key points
		double maxdist = 0; double mindist = 100;
		for (int i = 0; i < descriptors1GPU.rows; i++)
		{
			double dist = matches[i].distance;
			if (dist < mindist) mindist = dist;
			if (dist > maxdist) maxdist = dist;
		}
		//cout << "Matching quantity:" << matches.size() << endl;

		// select the good match points
		vector<DMatch> goodMatches;
		for (int i = 0; i < descriptors1GPU.rows; i++)
		{
			if (matches[i].distance < 2 * mindist)
			{
				goodMatches.push_back(matches[i]);
			}
		}
		//cout << "Good matching quantity:" << goodMatches.size() << endl;

		// calculate parallax
		vector<parallax> para;
		for (int i = 0; i < goodMatches.size(); i++)
		{
			parallax temp;
			// queryIdx indexes the left keypoints, trainIdx the right ones
			// (using queryIdx for both images was a bug)
			temp.leftX = keypoints1[goodMatches[i].queryIdx].pt.x + x_before1;
			temp.rightX = keypoints2[goodMatches[i].trainIdx].pt.x + x_before2;
			temp.paraValue = temp.leftX - temp.rightX;
			para.push_back(temp);
			cout << "No." << i + 1 << ":\t l_X ";
			cout << para[i].leftX << "\t r_X " << para[i].rightX;
			cout << "\t surf parallax " << para[i].paraValue << endl;
		}
		if (para.empty())
			throw std::runtime_error("no parallax candidates"); // handled by catch below
		// sort so that the element taken next really is the median
		sort(para.begin(), para.end(), ascendPara);
		int idxMedian = int(para.size() / 2);
		double paraMedian = para[idxMedian].paraValue;
		// discard disparities deviating more than errorRange from the median
		vector<parallax>::iterator it;
		for (it = para.begin(); it != para.end(); )
		{
			if (it->paraValue < ((1 - errorRange) * paraMedian) || it->paraValue > ((1 + errorRange) * paraMedian))
				it = para.erase(it);
			else
				it++;
		}
		cout << "Final data..." << endl;
		double paraSum = 0;
		double paraMean;
		for (int i = 0; i < para.size(); i++)
		{
			paraSum = paraSum + para[i].paraValue;
			cout << "No." << i << "\t" << para[i].paraValue << endl;
		}
		paraMean = paraSum / para.size();  // mean disparity of the surviving matches
		cout << "surf Parallax is " << paraMean << " pixel." << endl;
		// z = bf/d
		dis = (cameradis * camera_f) / (paraMean);

		// drawing the results
		Mat img_matches;
		drawMatches(Mat(imageL), keypoints1, Mat(imageR), keypoints2, matches, img_matches);
		cv::imwrite("D:\\桌面文件\\save\\surf_img.jpg", img_matches);
	}
	catch (...) {
		dis = 100;  // fallback distance when detection or matching fails
	}
	//cout << "now dis:" << dis << endl;
	return dis;
}
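
A hedged usage sketch tying steps 1) through 3) together; the box coordinates stand in for yolov5 detections and are not real values:

#include <opencv2/opencv.hpp>

// Hedged sketch -- the Rects stand in for yolov5 detections of the same target
// in the rectified color images produced by binocular_vision.
float target_distance(ObstacleRecongnition &rec,
                      const cv::Mat &rectL, const cv::Mat &rectR) {
	cv::Rect boxL(400, 300, 120, 80);   // target box in the rectified left image
	cv::Rect boxR(372, 300, 120, 80);   // the same target in the right image
	cv::Mat cropL = rectL(boxL).clone();
	cv::Mat cropR = rectR(boxR).clone();
	// pass the crop origins so the disparity is computed in full-image x coordinates
	return rec.surf_demo(cropL, cropR, boxL.x, boxR.x);  // distance via z = bf/d
}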

6. Calculate distance through midpoint

Run detection directly on the rectified images, take the midpoint of each target detection box as the matched point, obtain the disparity from the two midpoints, and then compute the distance.
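
A minimal sketch of this midpoint method, reusing the project's cameradis (baseline) and camera_f (focal length in pixels) constants; the boxes are again placeholder detections:

#include <opencv2/core.hpp>

// Hedged sketch of the midpoint method: disparity from the two box centers.
float midpoint_distance(const cv::Rect &boxL, const cv::Rect &boxR,
                        float cameradis, float camera_f) {
	float xl = boxL.x + boxL.width / 2.0f;  // left box center x, px
	float xr = boxR.x + boxR.width / 2.0f;  // right box center x, px
	float d = xl - xr;                      // disparity in pixels
	if (d <= 0) return -1.0f;               // invalid or mismatched boxes
	return cameradis * camera_f / d;        // z = bf/d
}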

7. References

1.https://blog.csdn.net/hhjoerrrr/article/details/124127143
2.https://blog.csdn.net/qq_40700822/article/details/124251201
3.https://blog.csdn.net/weixin_30072103/article/details/110549798
4.https://blog.csdn.net/Yong_Qi2015/article/details/107704269
5.https://blog.csdn.net/qinchang1/article/details/86934636
6.https://blog.csdn.net/dulingwen/article/details/104142149
