Stereo vision simulation in Gazebo with custom camera parameters, using ORB_SLAM2

First we build a custom camera model in Gazebo, with the help of the open-source ORB_SLAM2 project. Since ORB_SLAM2 natively supports stereo cameras, I worked through a number of blogs and reference docs, and this post records the resulting workflow.

My setup:
1. Ubuntu 16.04
2. ROS Kinetic
3. Gazebo 9
4. OpenCV configured in the workspace
5. Eigen3
6. Pangolin

Setting up ORB_SLAM2

My references were a short guide and a more detailed one (the latter is more thorough), plus the official setup instructions (links in the original post).
The steps are short and simple; a quick summary:
(1) Environment setup
In the src folder of your workspace, clone the robot simulation environment (ROS-Academy-for-Beginners, provided by 北京中科重德智能科技有限公司, a company incubated by the Institute of Software, Chinese Academy of Sciences), then install its dependencies and refresh the environment:

	# download the simulation environment (run in your workspace's src/)
	git clone https://github.com/DroidAITech/ROS-Academy-for-Beginners.git

	# install dependencies (run from the workspace root)
	rosdep install --from-paths src --ignore-src --rosdistro=kinetic -y

	# build and refresh the environment
	catkin_make
	echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
	source ~/.bashrc

(2) Install and build the package and its dependencies

	# install library dependencies
	sudo apt-get install libboost-all-dev libblas-dev liblapack-dev

	# download the ORB_SLAM2 source into the workspace's src/
	git clone https://github.com/raulmur/ORB_SLAM2.git

Pangolin installation reference: (link in the original post)

Eigen3 installation reference: (link in the original post)

	# build the ORB_SLAM2 package
	# set the environment variable for the current shell only
	export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:~/catkin_ws/src/ORB_SLAM2/Examples/ROS
	cd ORB_SLAM2
	chmod +x build.sh
	./build.sh
	chmod +x build_ros.sh
	./build_ros.sh

	cd
	gedit .bashrc
	# append at the end of the file (takes effect permanently):
	export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:~/catkin_ws/src/ORB_SLAM2/Examples/ROS

	# install dependencies
	sudo apt-get -y install libopencv-dev build-essential cmake git libgtk2.0-dev pkg-config python-dev python-numpy libdc1394-22 libdc1394-22-dev libjpeg-dev libpng12-dev libjasper-dev libavcodec-dev libavformat-dev libswscale-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev libv4l-dev libtbb-dev libqt4-dev libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev x264 v4l-utils unzip

(3) Launch and test

Edit the source file ORB_SLAM2/Examples/ROS/ORB_SLAM2/src/ros_rgbd.cc, changing the depth topic from depth_registered to depth:

	// before:
	message_filters::Subscriber<sensor_msgs::Image> depth_sub(nh, "camera/depth_registered/image_raw", 1);
	// after:
	message_filters::Subscriber<sensor_msgs::Image> depth_sub(nh, "camera/depth/image_raw", 1);
	# new terminal: the ORB_SLAM2 node
	roslaunch orbslam2_demo ros_orbslam2.launch

	# new terminal: the robot simulation environment
	roslaunch robot_sim_demo robot_spawn.launch

	# new terminal: the keyboard teleop node
	rosrun robot_sim_demo robot_keyboard_teleop.py

If the point-cloud view and the camera image appear, the setup succeeded.
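To double-check from the command line (a quick sketch; the topic names come from the default xbot simulation):

	# list the camera topics published by the simulation
	rostopic list | grep camera
	# view an image topic (e.g. /camera/rgb/image_raw) interactively
	rqt_image_view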

How to modify the camera parameters

Workflow overview

Most of the camera-parameter changes happen inside the src/ROS-Academy-for-Beginners package of the workspace. The original post shows a flowchart of the two approaches here (merged view and left/right views).


As before, the references first (links in the original post):
Camera parameter modification method 1
Camera parameter modification method 2

My requirement here is just the stereo images (left/right or merged), so the SLAM mapping part can be ignored for now; running SLAM automatically displays a grayscale camera view, and you can also create a new package to read the left/right color images.

Below are only the steps and parameter changes for the stereo setup.
First locate ../.../src/ROS-Academy-for-Beginners/robot_sim_demo/urdf/xbot-u.gazebo
(if you cannot find it, search the workspace for xbot-u, as shown below; this file stores the robot's sensor definitions, and we need to replace its camera model with a stereo camera model).
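A quick way to search (assuming the workspace is at ~/catkin_ws):

	find ~/catkin_ws/src -name "xbot-u.gazebo"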

Replacing the camera model with a stereo camera model

Find the camera section in xbot-u.gazebo (if you are worried about making a mistake, or this is your first edit, back up the file first):

 <!-- camera -->
    <gazebo reference="camera_link">  
      <sensor type="depth" name="camera">
        <plugin name="kinect_camera_controller" filename="libgazebo_ros_openni_kinect.so">
          <cameraName>camera</cameraName>
          <alwaysOn>true</alwaysOn>
          <updateRate>10</updateRate>
          <imageTopicName>rgb/image_raw</imageTopicName>
          <depthImageTopicName>depth/image_raw</depthImageTopicName>
          <pointCloudTopicName>depth/points</pointCloudTopicName>
          <cameraInfoTopicName>rgb/camera_info</cameraInfoTopicName>
          <depthImageCameraInfoTopicName>depth/camera_info</depthImageCameraInfoTopicName>
          <frameName>camera_link_optical</frameName>
          <baseline>0.1</baseline>
          <distortion_k1>0.0</distortion_k1>
          <distortion_k2>0.0</distortion_k2>
          <distortion_k3>0.0</distortion_k3>
          <distortion_t1>0.0</distortion_t1>
          <distortion_t2>0.0</distortion_t2>
          <pointCloudCutoff>0.4</pointCloudCutoff>
        </plugin>
      </sensor>
    </gazebo>
 

Replace it with:

<!-- stereo camera -->
  <gazebo reference="camera_link">
      <sensor type="multicamera" name="stereo_camera">
        <update_rate>20.0</update_rate>
        <camera name="left">
          <horizontal_fov>1.3962634</horizontal_fov>
          <image>
            <width>752</width>
            <height>480</height>
            <format>R8G8B8</format>
          </image>
          <clip>
            <near>0.02</near>
            <far>300</far>
          </clip>
          <noise>
            <type>gaussian</type>
            <mean>0.0</mean>
            <stddev>0.007</stddev>
          </noise>
        </camera>
        <camera name="right">
          <pose>0 -0.07 0 0 0 0</pose>
          <horizontal_fov>1.3962634</horizontal_fov>
          <image>
            <width>752</width>
            <height>480</height>
            <format>R8G8B8</format>
          </image>
          <clip>
            <near>0.02</near>
            <far>300</far>
          </clip>
          <noise>
            <type>gaussian</type>
            <mean>0.0</mean>
            <stddev>0.007</stddev>
          </noise>
        </camera>
        <plugin name="stereo_camera_controller" filename="libgazebo_ros_multicamera.so">
          <alwaysOn>true</alwaysOn>
          <updateRate>0.0</updateRate>
          <cameraName>camera</cameraName>
          <imageTopicName>image_raw</imageTopicName>
          <cameraInfoTopicName>camera_info</cameraInfoTopicName>
          <frameName>camera_link</frameName>
          <hackBaseline>0.07</hackBaseline>
          <distortionK1>0.0</distortionK1>
          <distortionK2>0.0</distortionK2>
          <distortionK3>0.0</distortionK3>
          <distortionT1>0.0</distortionT1>
          <distortionT2>0.0</distortionT2>
        </plugin>
      </sensor>
    </gazebo>
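After saving, relaunch the simulation and make sure the plugin publishes the left and right image topics. With <cameraName>camera</cameraName> as above, gazebo_ros_multicamera namespaces them by camera name (a quick check; topic names depend on your settings):

	roslaunch robot_sim_demo robot_spawn.launch
	# in another terminal:
	rostopic list | grep camera
	# expect /camera/left/image_raw and /camera/right/image_raw among the output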

A brief explanation of the stereo camera model parameters

The original post explains these parameters with an annotated figure; in short (modify as needed): update_rate is the capture rate in Hz; horizontal_fov is the horizontal field of view in radians; width, height, and format set the image resolution and pixel format; near and far in clip bound the rendering distances in meters; noise adds Gaussian per-pixel noise; and the right camera's pose shifts it 0.07 m along -y, which is the stereo baseline and should match hackBaseline in the plugin.
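For the calibration file later, it helps to know how the intrinsics follow from these model values. A sketch using the standard pinhole relation (w = image width in pixels, theta = horizontal_fov; the numbers are for this model):

	f_x = w / (2 * tan(theta / 2)) = 752 / (2 * tan(1.3962634 / 2)) ≈ 448.1

ORB_SLAM2's Camera.bf is the baseline in meters times f_x, so with the 0.07 m baseline above, bf ≈ 0.07 × 448.1 ≈ 31.4. Note that the sample YAML below keeps the numbers from the referenced tutorial (fx = 277.19, bf = 47.91) rather than values derived from this model; if you change the model parameters, recompute these accordingly.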
Modifying the stereo camera to obtain the left and right images

Merged view vs. left/right views

The flowchart mentioned earlier covers two stereo simulation modes:
1. Merged view (includes mapping; grayscale)
2. Left/right views (you write your own package to read them)

The step above changed the camera model; the package we write later reads images from this model, so the parameters above determine the stereo parameters of the left/right views. The merged view's parameters are modified in a separate settings file: since it is a merged view, that change simply swaps the original MONO (monocular) or RGB-D depth camera for a stereo camera.
Two display methods therefore follow, each with its camera parameters modified in a different file. If you only want the left/right views, the parameter explanation above is all you need.

How to obtain the merged view

First create a new text file named Stereo_setting.yaml and write the stereo parameters into it (following camera parameter modification method 1).

Once written, put the file under .../src/ORB_SLAM2-master/Examples/Stereo.

File contents:

%YAML:1.0

#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------

# Camera calibration and distortion parameters (OpenCV) 
Camera.fx: 277.19135641132203
Camera.fy: 277.19135641132203
Camera.cx: 160.5
Camera.cy: 120.5

Camera.k1: 0.0
Camera.k2: 0.0
Camera.k3: 0.0
Camera.p1: 0.0
Camera.p2: 0.0

Camera.width: 752
Camera.height: 480

# Camera frames per second 
Camera.fps: 20.0

# stereo baseline times fx
Camera.bf: 47.90639384423901

# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1

# Close/Far threshold. Baseline times.
ThDepth: 35

#--------------------------------------------------------------------------------------------
# Stereo Rectification. Only if you need to pre-rectify the images.
# Camera.fx, .fy, etc must be the same as in LEFT.P
#--------------------------------------------------------------------------------------------
LEFT.height: 480
LEFT.width: 752
LEFT.D: !!opencv-matrix
   rows: 1
   cols: 5
   dt: d
   data: [-0.28340811, 0.07395907, 0.00019359, 1.76187114e-05, 0.0]
LEFT.K: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [458.654, 0.0, 367.215, 0.0, 457.296, 248.375, 0.0, 0.0, 1.0]
LEFT.R:  !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [0.999966347530033, -0.001422739138722922, 0.008079580483432283, 0.001365741834644127, 0.9999741760894847, 0.007055629199258132, -0.008089410156878961, -0.007044357138835809, 0.9999424675829176]
LEFT.P:  !!opencv-matrix
   rows: 3
   cols: 4
   dt: d
   data: [435.2046959714599, 0, 367.4517211914062, 0,  0, 435.2046959714599, 252.2008514404297, 0,  0, 0, 1, 0]

RIGHT.height: 480
RIGHT.width: 752
RIGHT.D: !!opencv-matrix
   rows: 1
   cols: 5
   dt: d
   data: [-0.28368365, 0.07451284, -0.00010473, -3.555907e-05, 0.0]
RIGHT.K: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [457.587, 0.0, 379.999, 0.0, 456.134, 255.238, 0.0, 0.0, 1]
RIGHT.R:  !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [0.9999633526194376, -0.003625811871560086, 0.007755443660172947, 0.003680398547259526, 0.9999684752771629, -0.007035845251224894, -0.007729688520722713, 0.007064130529506649, 0.999945173484644]
RIGHT.P:  !!opencv-matrix
   rows: 3
   cols: 4
   dt: d
   data: [435.2046959714599, 0, 367.4517211914062, -47.90639384423901, 0, 435.2046959714599, 252.2008514404297, 0, 0, 0, 1, 0]

#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------

# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1200

# ORB Extractor: Scale factor between levels in the scale pyramid   
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid  
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast           
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7

#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500

Next, write the launch file, orbslam2_stereo.launch.
Again, just create a new file with that name. This launch file points ORB_SLAM2 at its own vocabulary and at the camera settings we just wrote; put it in ORB_SLAM2's launch folder. Note that the vocabulary_files/ and setting_files/ paths below follow the referenced tutorial's layout: adjust the arg values so they point at wherever ORBvoc.txt and your Stereo_setting.yaml actually live (e.g. Examples/Stereo from the previous step). Contents:

<launch>
    <arg name="PATH_TO_VOCABULARY" value="$(find ORB_SLAM2)/vocabulary_files/ORBvoc.txt"/>
    <arg name="PATH_TO_SETTINGS_FILE" value="$(find ORB_SLAM2)/setting_files/Stereo_setting.yaml"/>
    
    <node name="Stereo" pkg="ORB_SLAM2" type="Stereo" args="$(arg PATH_TO_VOCABULARY) $(arg PATH_TO_SETTINGS_FILE) false">
    </node>

</launch>
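For reference, the ORB_SLAM2 README documents the Stereo node as taking the vocabulary, the settings file, and an online-rectification flag, and subscribing to /camera/left/image_raw and /camera/right/image_raw, which matches the topics our Gazebo plugin publishes. The launch file above is therefore equivalent to:

	rosrun ORB_SLAM2 Stereo PATH_TO_VOCABULARY PATH_TO_SETTINGS_FILE false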

With everything in place, you can run:

roslaunch ORB_SLAM2 orbslam2_stereo.launch

If it does not start, check whether the environment variable was written and whether roscore is running. That completes the merged-view method.
(The original post shows a screenshot of the running result here, taken from camera parameter modification method 2.)

Obtaining the left/right views

Reading the left/right views requires writing a new package: create one under src in your workspace, or reuse a package that already has OpenCV configured, since reading the stereo images uses OpenCV.
Again, the reference for the left/right image program first: (link in the original post).
You can also do this in a fresh workspace (skip this step if you already have such a workspace, or if you know the drill); the detailed-steps link covers creating an OpenCV workspace, so assume from here that we have one.
The program in that link reads a single image; below is a new program that subscribes to both stereo images:

Save the program as a .cpp file (mine is named opencvtest.cpp) and put it into the new package under the workspace's src/ directory.

#include <ros/ros.h>
#include <sensor_msgs/image_encodings.h>      // encodings for ROS image types
#include <image_transport/image_transport.h>  // publish/subscribe to image topics
 
// standard OpenCV2 headers
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <cv_bridge/cv_bridge.h>              // CvBridge converts between ROS images and cv::Mat
 
#include <iostream>
using namespace std;
 
// subscriber callbacks
void subLeftImage_cb(const sensor_msgs::ImageConstPtr& msg)
{
    try
    {
        cout << " image_L receive success " << endl;
        cv::imshow("image_l", cv_bridge::toCvShare(msg, "bgr8")->image);
        cv::waitKey(10);  // give highgui time to service the cv::imshow() draw request (10 ms)
    }
    catch (cv_bridge::Exception& e)
    {
        cout << "Could not convert from " << msg->encoding.c_str() << " to 'bgr8'." << endl;
    }
}

void subRightImage_cb(const sensor_msgs::ImageConstPtr& msg)
{
    try
    {
        cout << " image_R receive success " << endl;
        cv::imshow("image_r", cv_bridge::toCvShare(msg, "bgr8")->image);
        cv::waitKey(10);  // give highgui time to service the cv::imshow() draw request (10 ms)
    }
    catch (cv_bridge::Exception& e)
    {
        cout << "Could not convert from " << msg->encoding.c_str() << " to 'bgr8'." << endl;
    }
}

int main(int argc, char *argv[])
{
    ros::init(argc, argv, "subscribImage");
 
    ros::NodeHandle nh;  // create the node handle
    image_transport::ImageTransport it(nh);
 
    image_transport::Subscriber sub_L;
    image_transport::Subscriber sub_R;
 
    // subscribe to the stereo topics; the prefix must match <cameraName> in
    // xbot-u.gazebo ("camera" above, which yields camera/left/image_raw etc.)
    sub_L = it.subscribe("camera/left/image_raw", 1, subLeftImage_cb);
    sub_R = it.subscribe("camera/right/image_raw", 1, subRightImage_cb);
 
    ros::spin();
 
    return 0;
}
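If you do not yet have an OpenCV-ready package, here is a minimal sketch of creating one (the names opencv_test and opencvtest are just this tutorial's choices, matching the run commands below; you still add the build rules for the node yourself):

	cd ~/catkin_ws/src
	# create a package depending on what the program uses
	catkin_create_pkg opencv_test roscpp cv_bridge image_transport sensor_msgs
	# put opencvtest.cpp into opencv_test/src/, then add
	#   add_executable(opencvtest src/opencvtest.cpp)
	#   target_link_libraries(opencvtest ${catkin_LIBRARIES} ${OpenCV_LIBRARIES})
	# to opencv_test/CMakeLists.txt (with find_package(OpenCV REQUIRED)), and build:
	cd ~/catkin_ws && catkin_make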

Finally, run the simulation and the program:

roslaunch robot_sim_demo robot_spawn.launch
rosrun <your_opencv_package> <your_node_name>
# with the naming used in this tutorial: rosrun opencv_test opencvtest

You should now see the color stereo views. To repeat: the camera parameters for these views are the ones set in the xbot-u.gazebo file.
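To confirm that your custom parameters took effect, you can also inspect the camera_info the plugin publishes (a quick check; the topic name assumes the model above):

	rostopic echo -n 1 /camera/left/camera_info
	# the K matrix should reflect the width/height and horizontal_fov you set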

The result looks like this (my scene contains some extra models I imported; with the stereo program running normally you should simply see the two image windows plus Gazebo).
This completes the camera model: a stereo camera with customizable parameters, with both the left/right views and the merged view available.

Reposted from blog.csdn.net/weixin_44502784/article/details/105699757