Image Processing 17: Affine Transformations

Goal

In this tutorial you will learn how to:

  • Use the OpenCV function cv::warpAffine to implement simple remapping routines.
  • Use the OpenCV function cv::getRotationMatrix2D to obtain a 2×3 rotation matrix.

Theory

What is an Affine Transformation?

  1. A transformation that can be expressed in the form of a matrix multiplication (linear transformation) followed by a vector addition (translation).
  2. From the above, we can use an Affine Transformation to express:

    1. Rotations (linear transformation)
    2. Translations (vector addition)
    3. Scale operations (linear transformation)

    You can see that, in essence, an Affine Transformation represents a relation between two images.

  3. The usual way to represent an Affine Transformation is by using a 2×3 matrix.

$$
A = \begin{bmatrix} a_{00} & a_{01} \\ a_{10} & a_{11} \end{bmatrix}_{2 \times 2}
\qquad
B = \begin{bmatrix} b_{00} \\ b_{10} \end{bmatrix}_{2 \times 1}
$$

$$
M = \begin{bmatrix} A & B \end{bmatrix}
  = \begin{bmatrix} a_{00} & a_{01} & b_{00} \\ a_{10} & a_{11} & b_{10} \end{bmatrix}_{2 \times 3}
$$

Considering that we want to transform a 2D vector $X = \begin{bmatrix} x \\ y \end{bmatrix}$ by using $A$ and $B$, we can do the same with:

$$
T = A \cdot \begin{bmatrix} x \\ y \end{bmatrix} + B
\qquad \text{or} \qquad
T = M \cdot [x, y, 1]^{T}
$$

$$
T = \begin{bmatrix} a_{00}x + a_{01}y + b_{00} \\ a_{10}x + a_{11}y + b_{10} \end{bmatrix}
$$
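
As a quick illustration (not part of the tutorial program, and using made-up matrix values), the product T = M·[x, y, 1]^T can be evaluated with cv::transform, which applies a 2×3 floating-point matrix to a list of 2D points:

#include "opencv2/core.hpp"
#include <iostream>
#include <vector>

int main()
{
  // Example 2x3 affine matrix M = [A B]: A is the identity and B = (10, 20),
  // i.e. a pure translation (values chosen only for illustration)
  cv::Matx23d M( 1, 0, 10,
                 0, 1, 20 );

  // A single 2D vector X = (5, 7)
  std::vector<cv::Point2d> X = { cv::Point2d( 5, 7 ) };
  std::vector<cv::Point2d> T;

  cv::transform( X, T, M ); // computes T = A*X + B for every point

  std::cout << T[0] << std::endl; // prints [15, 27]
  return 0;
}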

How do we get an Affine Transformation?

  1. We mentioned that an Affine Transformation is basically a relation between two images. The information about this relation can come, roughly, in two ways:

     a. We know both X and T and we also know that they are related. Then our task is to find M.

     b. We know M and X. To obtain T we only need to apply T = M·X. Our information for M may be explicit (i.e. have the 2-by-3 matrix) or it can come as a geometric relation between points.

  2. Let's explain this in a better way (b). Since M relates two images, we can analyze the simplest case in which it relates three points in both images. Look at the figure below:

[Figure: Warp_Affine_Tutorial_Theory_0.jpg — three points forming a triangle in image 1 and their mapped positions in image 2]

The points 1, 2 and 3 (forming a triangle in image 1) are mapped into image 2, still forming a triangle, but their positions have changed noticeably. If we find the Affine Transformation with these 3 points (you can choose them as you like), then we can apply this found relation to all the pixels in an image.
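
As a minimal sketch of finding M from three such point correspondences, cv::getAffineTransform solves for the 2×3 matrix directly; the coordinates below are invented purely for illustration, and the full program in the next section does the same thing on a real image:

#include "opencv2/imgproc.hpp"
#include <iostream>

int main()
{
  // Three points in image 1 and their corresponding locations in image 2
  // (arbitrary coordinates, chosen only to illustrate the idea)
  cv::Point2f srcTri[3] = { { 0.f, 0.f }, { 100.f, 0.f }, { 0.f, 100.f } };
  cv::Point2f dstTri[3] = { { 10.f, 20.f }, { 90.f, 15.f }, { 25.f, 95.f } };

  // Solve for the 2x3 affine matrix M that maps srcTri onto dstTri
  cv::Mat M = cv::getAffineTransform( srcTri, dstTri );

  std::cout << "M = " << std::endl << M << std::endl;
  return 0;
}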

Code

  1. What does this program do?
    • Loads an image.
    • Applies an Affine Transform to the image. This transform is obtained from the relation between three points. We use the function cv::warpAffine for that purpose.
    • Applies a Rotation to the image after being transformed. This rotation is with respect to the image center.
    • Waits until the user exits the program.
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
using namespace cv;
using namespace std;
const char* source_window = "Source image";
const char* warp_window = "Warp";
const char* warp_rotate_window = "Warp + Rotate";
int main( int argc, char** argv )
{
  Point2f srcTri[3];
  Point2f dstTri[3];
  Mat rot_mat( 2, 3, CV_32FC1 );
  Mat warp_mat( 2, 3, CV_32FC1 );
  Mat src, warp_dst, warp_rotate_dst;
  CommandLineParser parser( argc, argv, "{@input | lena.jpg | input image}" );
  src = imread( parser.get<String>( "@input" ), IMREAD_COLOR );
  if( src.empty() )
  {
      cout << "Could not open or find the image!\n" << endl;
      cout << "Usage: " << argv[0] << " <Input image>" << endl;
      return -1;
  }
  warp_dst = Mat::zeros( src.rows, src.cols, src.type() );
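  // Three corners of the source image and the locations they should map to in the output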
  srcTri[0] = Point2f( 0,0 );
  srcTri[1] = Point2f( src.cols - 1.f, 0 );
  srcTri[2] = Point2f( 0, src.rows - 1.f );
  dstTri[0] = Point2f( src.cols*0.0f, src.rows*0.33f );
  dstTri[1] = Point2f( src.cols*0.85f, src.rows*0.25f );
  dstTri[2] = Point2f( src.cols*0.15f, src.rows*0.7f );
  warp_mat = getAffineTransform( srcTri, dstTri ); // compute the affine transform matrix from the three point pairs
  warpAffine( src, warp_dst, warp_mat, warp_dst.size() ); // source image, output image, transform matrix, output size
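  // Rotate the warped image about its own center by -50 degrees, scaled down to 60%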
  Point center = Point( warp_dst.cols/2, warp_dst.rows/2 );
  double angle = -50.0;
  double scale = 0.6;
  rot_mat = getRotationMatrix2D( center, angle, scale ); // rotation matrix from center, angle and scale
  warpAffine( warp_dst, warp_rotate_dst, rot_mat, warp_dst.size() ); // source, output, rotation matrix, output size
  namedWindow( source_window, WINDOW_AUTOSIZE );
  imshow( source_window, src );
  namedWindow( warp_window, WINDOW_AUTOSIZE );
  imshow( warp_window, warp_dst );
  namedWindow( warp_rotate_window, WINDOW_AUTOSIZE );
  imshow( warp_rotate_window, warp_rotate_dst );
  waitKey(0);
  return 0;

}
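
According to the OpenCV documentation, cv::getRotationMatrix2D(center, angle, scale) builds the following 2×3 matrix (angle in degrees, positive values rotating counter-clockwise, and the transform keeps center fixed):

$$
\begin{bmatrix}
\alpha & \beta & (1-\alpha)\cdot \mathrm{center.x} - \beta\cdot \mathrm{center.y} \\
-\beta & \alpha & \beta\cdot \mathrm{center.x} + (1-\alpha)\cdot \mathrm{center.y}
\end{bmatrix},
\qquad
\alpha = \mathrm{scale}\cdot\cos(\mathrm{angle}),\quad
\beta = \mathrm{scale}\cdot\sin(\mathrm{angle})
$$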

CMakeLists.txt

cmake_minimum_required(VERSION 2.8)

set(CMAKE_CXX_FLAGS "-std=c++11")
project( DisplayImage )
find_package( OpenCV REQUIRED )
include_directories( ${OpenCV_INCLUDE_DIRS} )
add_executable( DisplayImage main.cpp )
target_link_libraries( DisplayImage ${OpenCV_LIBS} )


install(TARGETS DisplayImage RUNTIME DESTINATION bin)

Results

Running the program opens three windows: "Source image" with the input, "Warp" with the affine-warped result, and "Warp + Rotate" with the warped image after the additional rotation and scaling about its center.

Reposted from blog.csdn.net/qq_27806947/article/details/80305379