Simple Robot Vision Control with MATLAB (Simulation 1)

1. Foreword: I downloaded some source code from a MATLAB forum (listed below). Its main function is to open a camera from MATLAB, detect a red object, obtain its coordinates, and pass them to a robot model built with the Robotics Toolbox so that the model moves accordingly.

2. The source code is below. (The downloaded code throws an error at runtime, yet in the video someone uploaded to YouTube it runs with no problem!)

clear all;
% Legacy DH-matrix robot description: one row per link.
t3r=[0 1 0 0;0 1 0 0;0 1 0 0];
r3bot=robot(t3r);   % robot() constructor from older Robotics Toolbox releases
a = imaqhwinfo;
%[camera_name, camera_id, format] = getCameraInfo(a);
 f1=figure;
 f2=figure;
% Capture the video frames using the videoinput function.
% Replace the adaptor name (and resolution, if needed) with your installed device's.
vid = videoinput('winvideo',1);
%sls=videoinput('winvideo',1)
% Set the properties of the video object
set(vid, 'FramesPerTrigger', Inf);
set(vid, 'ReturnedColorspace', 'rgb')
vid.FrameGrabInterval = 1;

% Start the video acquisition here
start(vid)
n=50;
% Loop until 500 frames have been acquired
while(vid.FramesAcquired<=500)
    % Get the snapshot of the current frame
    data = getsnapshot(vid);
    % To track red objects in real time, we subtract the grayscale
    % image from the red channel; this leaves only the strongly
    % red regions of the frame.
    diff_im = imsubtract(data(:,:,1), rgb2gray(data));
    %Use a median filter to filter out noise
    diff_im = medfilt2(diff_im, [3 3]);
    % Convert the resulting grayscale image into a binary image.
    diff_im = im2bw(diff_im,0.18);
    % Remove all connected components smaller than 300 pixels
    diff_im = bwareaopen(diff_im,300); 
    % Label all the connected components in the image.
    bw = bwlabel(diff_im, 8);
    % Here we do the image blob analysis.
    % We get a set of properties for each labeled region.
    stats = regionprops(bw, 'BoundingBox', 'Centroid');
    % Display the image
    figure(f1)
    imshow(data)
    % Loop over the detections: bound each red object with a rectangle
    % and label its centroid coordinates.
    for object = 1:length(stats)
        bb = stats(object).BoundingBox;
        bc = stats(object).Centroid;
        rectangle('Position',bb,'EdgeColor','r','LineWidth',2);
        a=text(bc(1)+15,bc(2), strcat('X: ', num2str(round(bc(1))), '    Y: ', num2str(round(bc(2)))));
        set(a, 'FontName', 'Arial', 'FontWeight', 'bold', 'FontSize', 12, 'Color', 'yellow');

        % Map the centroid to joint angles, solve the kinematics,
        % and animate the robot model.
        q=[60+bc(1),30-bc(2),45+bc(1)]*pi/180;
        t=fkine(r3bot,q);
        q1=ikine(r3bot,t,[1,-5,1]);
        q2=ikine(r3bot,t,[1,-.5,1]);
        figure(f2)
        plot(r3bot,q1);
        plot(r3bot,q2);   % the original plotted q1 twice; q2 was presumably intended
    end
%         %%%%%%%%%%%%%%%%%%%Robotics%%%%%%%%%%%%%
end
% Both the loops end here.

% Stop the video acquisition.
stop(vid);

% Flush all the image data stored in the memory buffer.
%flushdata(vid);

% Clear all variables
%clear all
%sprintf('%s','That was all about Image tracking, Guess that was pretty easy :) ')

Screenshot from the YouTube video:

3. Afterword: I would really appreciate an expert explaining what is going wrong here.
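One likely cause, assuming the installed Robotics Toolbox is release 9 or later: the `robot()` constructor and the bare DH-matrix description used above were replaced by the `Link`/`SerialLink` API, so `r3bot=robot(t3r)` fails immediately. A minimal sketch of the equivalent three-link model under that assumption (the per-row ordering of the four DH values is my reading of the legacy convention, so double-check it against your version):

```matlab
% Sketch: rebuild the 3R model with the Link/SerialLink API
% (assumes Robotics Toolbox release 9+; each Link row is
% [theta d a alpha], my mapping of the legacy [0 1 0 0] rows).
L1 = Link([0 0 1 0]);
L2 = Link([0 0 1 0]);
L3 = Link([0 0 1 0]);
r3bot = SerialLink([L1 L2 L3], 'name', 'r3bot');
% fkine/ikine then work as before, though the exact ikine
% initial-guess/mask arguments vary by release; see its help text.
```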

I do not know exactly where the problem lies. While troubleshooting, I rebuilt the robot model and changed some parameter formats in the Robotics part (code below). The model now moves, but with noticeable stutter. I suspect the slow inverse-kinematics solve is the cause, though it could also be the per-frame capture latency of the camera. I am writing this down hoping interested readers can discuss it: how can the inverse-kinematics computation be sped up, and what optimizations address this kind of problem? The animated GIF below also shows that the arm's motion is not smooth...
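On the speed question, one common trick (a sketch of my own, not the original author's code) is to warm-start `ikine` with the previous frame's solution instead of a fixed guess. Since consecutive targets are close together, the iterative solver converges in far fewer steps. Here `tracking` and `bc` stand in for the acquisition loop and the detected centroid, and the exact `ikine` calling convention varies across Toolbox releases:

```matlab
% Sketch: reuse the last solution as the initial guess for ikine.
% Assumes the 6-DOF SerialLink `bot` built in the model section.
q_prev = zeros(1, 6);             % start from the home pose
while tracking                    % hypothetical loop condition
    q  = [bc(1), bc(2), bc(1), 1, 1, 1] * pi/180;
    t  = fkine(bot, q);
    q1 = ikine(bot, t, q_prev);   % warm start from the last answer
    q_prev = q1;                  % carry it forward to the next frame
    plot(bot, q1);
end
```

With a good initial guess the solver typically needs only a few iterations per frame, which directly attacks the stutter described above.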

Robot model construction

L1=Link([0       0.4      0.025    pi/2      0     ]); 
L2=Link([pi/2    0        0.56     0         0     ]);
L3=Link([0       0        0.035    pi/2      0     ]);
L4=Link([0       0.515    0        pi/2      0     ]);
L5=Link([pi      0        0        pi/2      0     ]);
L6=Link([0       0.08     0        0         0     ]);
t3r=[L1;L2;L3;L4;L5;L6];
bot=SerialLink(t3r,'name','Useless');
a = imaqhwinfo;

Modified inverse-kinematics parameter format

q=[bc(1),bc(2),bc(1),1,1,1]*pi/180;
t=fkine(bot,q);
q1=ikine(bot,t);
figure(f2)
plot(bot,q1);
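A second suspect for the stutter is the rendering itself: calling `plot` every frame re-issues the full draw of the arm. If your Toolbox release supports it (an assumption; check `help SerialLink/animate`), drawing once and then only updating joint angles with `animate` is typically much cheaper:

```matlab
% Sketch: draw the robot once, then only animate joint updates.
% `animate` exists in Robotics Toolbox release 10; in older releases,
% repeated plot() calls reuse the existing figure graphics instead.
figure(f2);
plot(bot, q1);        % first call creates the arm graphics
% ... then, inside the acquisition loop:
animate(bot, q1);     % later calls just move the existing graphics
```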


Reposted from blog.csdn.net/weixin_39090239/article/details/81218271