Robot Learning Project - Project 1: Go Chase It! (5)

13. Simulate a white ball

Before proceeding to write the process_image client node, a white ball must be modeled and placed in the Gazebo World scene.

With the ball modeled, you can control its position in Gazebo and place it at different spots in front of the robot's camera. The process_image client node will be responsible for analyzing the robot's camera image and requesting a service from the drive_bot server node to drive the robot toward the white ball.

Now, let's move on to modeling the white ball using the Model Editor tool in Gazebo!

Model Editor

Here is how to open the Model Editor:

$ gazebo # then Edit-> Model Editor

Insert a Sphere

Under the Simple Shapes menu in the Model Editor Tools, click on a sphere and insert it anywhere in the scene.

Edit the Size

Double-click on the sphere and change its Radius to 0.1 under both Visual and Collision.

Change the Color

To change the color of the ball to white, set its Visual Ambient, Diffuse, Specular, and Emissive RGBA values to 1.

Save and Insert the Sphere

Save the white ball model as my_ball in the /home/workspace directory. Then exit the Model Editor, return to the main Gazebo interface, click Insert, and place the white ball anywhere in the scene.
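Under the hood, the Model Editor saves the ball as an SDF file inside the my_ball model directory. The sketch below shows roughly what that model.sdf may look like; the exact tags, ordering, and inertial values Gazebo writes can differ, so treat this as illustrative only:

```xml
<?xml version="1.0"?>
<sdf version="1.6">
  <model name="my_ball">
    <link name="link">
      <visual name="visual">
        <geometry>
          <sphere><radius>0.1</radius></sphere>
        </geometry>
        <!-- All-white material, as set in the Model Editor -->
        <material>
          <ambient>1 1 1 1</ambient>
          <diffuse>1 1 1 1</diffuse>
          <specular>1 1 1 1</specular>
          <emissive>1 1 1 1</emissive>
        </material>
      </visual>
      <collision name="collision">
        <geometry>
          <sphere><radius>0.1</radius></sphere>
        </geometry>
      </collision>
    </link>
  </model>
</sdf>
```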

Relaunch the World

Now that the white ball has been modeled, relaunch world.launch. Then verify that my_ball can be inserted anywhere in myworld.

Save the World

Place the white ball somewhere outside the building, where the robot cannot see it. Then save a copy of this new world to /home/workspace/catkin_ws/src/my_robot/worlds, replacing the old my.world file. When you launch this newly saved world file, both the building environment and the white ball should appear.
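If you prefer to check the saved world file by hand, my.world should now contain an include for the ball model similar to the sketch below (the pose values here are just an example of a spot outside the building):

```xml
<include>
  <uri>model://my_ball</uri>
  <!-- x y z roll pitch yaw: example placement only -->
  <pose>5.0 3.0 0.1 0 0 0</pose>
</include>
```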

Summary: Follow the steps above to model a white ball and incorporate it into the world scene.

14. ROS node: process_image

The second node written in this project is the process_image node. This client node subscribes to the robot's camera images and analyzes them to determine the position of the white ball. Once the position of the ball is determined, the client node requests a service from the drive_bot server node to drive the robot in the direction of the ball. The robot can drive left, right, or forward, depending on where the ball appears in the image.

After writing this node, place the white ball in front of the robot's camera. If all goes well, the node should analyze the image, detect the position of the ball, and then request a ball_chaser/command_robot service to drive the robot towards the white ball!

Subscribe to the /camera/rgb/image_raw topic to receive live images from the robot's camera. Inside the callback function, read the incoming image data. Image messages contain many fields, such as the image height, width, and data; see the full ROS sensor_msgs/Image documentation ( http://docs.ros.org/en/melodic/api/sensor_msgs/html/msg/Image.html ). Once you have the image message, loop through the image data. Compare each pixel to the value 255, which represents a bright white pixel; if you find such a pixel, determine which part of the image it falls in: left, center, or right. Then request a service to drive the robot in that direction.

Write process_image.cpp

Now it's time to write the process_image.cpp client node. This node will analyze the image and request services to drive the robot. Create the source file in the src directory of the ball_chaser package. A skeleton with several hints follows; the complete reference code is attached at the end of this section:

#include "ros/ros.h"
#include "ball_chaser/DriveToTarget.h"
#include <sensor_msgs/Image.h>

// Define a global client that can request services
ros::ServiceClient client;

// This function calls the command_robot service to drive the robot in the specified direction
void drive_robot(float lin_x, float ang_z)
{
    // TODO: Request a service and pass the velocities to it to drive the robot
}

// This callback function continuously executes and reads the image data
void process_image_callback(const sensor_msgs::Image img)
{

    int white_pixel = 255;

    // TODO: Loop through each pixel in the image and check if there's a bright white one
    // Then, identify if this pixel falls in the left, mid, or right side of the image
    // Depending on the white ball position, call the drive_bot function and pass velocities to it
    // Request a stop when there's no white ball seen by the camera
}

int main(int argc, char** argv)
{
    // Initialize the process_image node and create a handle to it
    ros::init(argc, argv, "process_image");
    ros::NodeHandle n;

    // Define a client service capable of requesting services from command_robot
    client = n.serviceClient<ball_chaser::DriveToTarget>("/ball_chaser/command_robot");

    // Subscribe to /camera/rgb/image_raw topic to read the image data inside the process_image_callback function
    ros::Subscriber sub1 = n.subscribe("/camera/rgb/image_raw", 10, process_image_callback);

    // Handle ROS communication events
    ros::spin();

    return 0;
}

Copy this code into process_image.cpp, and make the necessary changes.

Edit CMakeLists.txt

In addition to all dependencies previously added for drive_bot.cpp, here are the dependencies that should be added for process_image.cpp:

  • Add add_executable

  • Add target_link_libraries

  • Add add_dependencies
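In CMakeLists.txt, these three additions typically look like the sketch below (assuming the executable target is named process_image and message generation happens in the same ball_chaser package):

```cmake
add_executable(process_image src/process_image.cpp)
target_link_libraries(process_image ${catkin_LIBRARIES})
add_dependencies(process_image ball_chaser_generate_messages_cpp)
```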

Build the Package

Now that you have included specific instructions for the process_image.cpp code in CMakeLists.txt, compile it with:

$ cd /home/workspace/catkin_ws/
$ catkin_make

Launch File

Edit the ball_chaser.launch file saved under /home/workspace/catkin_ws/src/ball_chaser/launch and add the process_image node to it. Now, launching this file should run both drive_bot and process_image!
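For reference, the updated ball_chaser.launch could look something like this sketch (the node names and output settings are assumptions):

```xml
<launch>
  <!-- The drive_bot server node from the previous section -->
  <node name="drive_bot" type="drive_bot" pkg="ball_chaser" output="screen" />

  <!-- The process_image client node added in this section -->
  <node name="process_image" type="process_image" pkg="ball_chaser" output="screen" />
</launch>
```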

Test process_image

To test that the code you just wrote works as expected, first start the robot in the environment, then run the drive_bot and process_image nodes.

1) Start the robot in myworld

By launching the world.launch file:

$ cd /home/workspace/catkin_ws/
$ source devel/setup.bash
$ roslaunch my_robot world.launch

2) Run drive_bot and process_image

By executing ball_chaser.launch:

$ cd /home/workspace/catkin_ws/
$ source devel/setup.bash
$ roslaunch ball_chaser ball_chaser.launch

3) Visualization

To visualize the robot's camera image, subscribe to the Camera RGB Image topic from RViz. Or run the rqt_image_view node:

$ cd /home/workspace/catkin_ws/
$ source devel/setup.bash
$ rosrun rqt_image_view rqt_image_view  

Now place the white ball in different positions in front of the robot to see if the robot is capable of chasing the ball!

Summary: Follow the steps above to create the process_image node.

Note: The process_image reference code is as follows:

#include "ros/ros.h"
#include "ball_chaser/DriveToTarget.h"
#include <sensor_msgs/Image.h>

// Define a global client that can request services
ros::ServiceClient client;

// This function calls the command_robot service to drive the robot in the specified direction
void drive_robot(float lin_x, float ang_z)
{
    ROS_INFO_STREAM("Driving the robot");

    // TODO: Request a service and pass the velocities to it to drive the robot
    ball_chaser::DriveToTarget srv;

    srv.request.linear_x = lin_x;
    srv.request.angular_z = ang_z;

    // Call the command_robot service and pass the requested motor commands
    if (!client.call(srv))
    {
        ROS_ERROR("Failed to call service command_robot");
    }
}

// This callback function continuously executes and reads the image data
void process_image_callback(const sensor_msgs::Image img)
{

    int white_pixel = 255;

    // TODO: Loop through each pixel in the image and check if there's a bright white one
    // Then, identify if this pixel falls in the left, mid, or right side of the image
    // Depending on the white ball position, call the drive_bot function and pass velocities to it
    // Request a stop when there's no white ball seen by the camera
    int ball_position;
    int ball_position_center;
    int ball_position_sum = 0;
    int white_pixel_num = 0;

    // Step through the flat rgb8 data buffer three bytes (one pixel) at a time
    for (int i = 0; i + 2 < img.data.size(); i += 3)
    {
        if ((img.data[i] == white_pixel) && (img.data[i+1] == white_pixel) && (img.data[i+2] == white_pixel))
        {
            // Column of this pixel: offset within the row, divided by 3 bytes per pixel
            ball_position = (i % (img.width * 3)) / 3;
            ball_position_sum += ball_position;
            white_pixel_num++;
        }
    }
    
    if (white_pixel_num == 0)
    {
        drive_robot(0, 0);
    }
    else
    {
        ball_position_center = ball_position_sum / white_pixel_num;
        
        if(ball_position_center < img.width / 3)
        {
            drive_robot(0.1, 0.5);
        }
        else if(ball_position_center > img.width * 2 / 3)
        {
            drive_robot(0.1, -0.5);
        }
        else
        {
            drive_robot(0.1, 0);
        }
    }
}

int main(int argc, char** argv)
{
    // Initialize the process_image node and create a handle to it
    ros::init(argc, argv, "process_image");
    ros::NodeHandle n;

    // Define a client service capable of requesting services from command_robot
    client = n.serviceClient<ball_chaser::DriveToTarget>("/ball_chaser/command_robot");

    // Subscribe to /camera/rgb/image_raw topic to read the image data inside the process_image_callback function
    ros::Subscriber sub1 = n.subscribe("/camera/rgb/image_raw", 10, process_image_callback);

    // Handle ROS communication events
    ros::spin();

    return 0;
}

Source: blog.csdn.net/jeffliu123/article/details/129783839