Feature Matching in OpenCV

Brute-Force Matching

BFMatcher (Brute-Force Matcher) performs brute-force matching: every descriptor in the first set is compared against every descriptor in the second set. The core call is BFMatcher.knnMatch(), where kNN stands for k-nearest neighbors, a classification algorithm.
The kNN algorithm finds the k records in the training set that are closest to a new sample and assigns the sample to the majority class among them. The algorithm involves three main factors: the training set, the distance (or similarity) measure, and the size of k. Its core idea is that if most of a sample's k nearest neighbors in feature space belong to a certain class, the sample belongs to that class too and shares the characteristics of those neighbors. The classification decision therefore depends only on the class of the nearest one or few samples.
Because kNN relies only on a limited number of nearby samples rather than on discriminating class regions, it is better suited than other methods to sample sets whose class regions heavily intersect or overlap.
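As a minimal, self-contained illustration of the kNN decision rule described above (plain Python with made-up toy data, not part of OpenCV):

import numpy as np

def knn_classify(query, samples, labels, k=3):
    # classify `query` by majority vote among its k nearest samples
    dists = np.linalg.norm(samples - query, axis=1)  # Euclidean distance to every sample
    nearest = np.argsort(dists)[:k]                  # indices of the k closest samples
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)          # majority class wins

# toy data: two clusters labeled 'A' and 'B'
samples = np.array([[0, 0], [1, 0], [0, 1], [5, 5], [6, 5], [5, 6]], dtype=float)
labels = ['A', 'A', 'A', 'B', 'B', 'B']
print(knn_classify(np.array([0.5, 0.5]), samples, labels))  # -> 'A'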
In practice, BFMatcher can spend a lot of time matching, since every descriptor is compared against every other.

If you create the matcher with bf = cv2.BFMatcher(crossCheck=True), the two feature points must match each other. For example, if feature point A in the first image matches feature point B in the second image, then B's best match in the first image must also be A; if B instead matches some other point C in the first image, the pair does not satisfy the crossCheck=True condition.
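A minimal sketch of cross-check matching, assuming des1 and des2 are descriptor arrays computed as in the code further below:

# crossCheck=True keeps a pair (i, j) only if i's best match is j AND j's best match is i
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = bf.match(des1, des2)                       # one-to-one matches only
matches = sorted(matches, key=lambda m: m.distance)  # best matches first

Note that crossCheck is an alternative to the ratio test used later in this post; it is normally used with match() rather than knnMatch().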

matches = bf.knnMatch(des1, des2, k=k) returns the k best matches for each query descriptor. Alternatively, matches = bf.match(des1, des2) returns a single one-to-one best match per descriptor.
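To make the difference concrete, a short sketch (des1 and des2 as in the code below):

bf = cv2.BFMatcher()
pairs = bf.knnMatch(des1, des2, k=2)  # one list of the 2 best cv2.DMatch objects per query descriptor
single = bf.match(des1, des2)         # one flat list of cv2.DMatch objects, a single best match each

The nested structure returned by knnMatch() is what makes the ratio test below possible: each inner list holds a best and a second-best candidate to compare.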

import numpy as np
import cv2
from matplotlib import pyplot as plt

imgname1 = 'path/to/image1.jpg'  # first image to match
imgname2 = 'path/to/image2.jpg'  # second image to match

sift = cv2.xfeatures2d.SIFT_create()  # in OpenCV >= 4.4, use cv2.SIFT_create()

img1 = cv2.imread(imgname1)
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)  # convert to grayscale
kp1, des1 = sift.detectAndCompute(gray1, None)  # kp1: keypoints, des1: descriptors

img2 = cv2.imread(imgname2)
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)  # convert to grayscale
kp2, des2 = sift.detectAndCompute(gray2, None)  # kp2: keypoints, des2: descriptors

hmerge = np.hstack((gray1, gray2))  # concatenate horizontally
cv2.imshow("gray", hmerge)          # show the two grayscale images side by side
cv2.waitKey(0)

img3 = cv2.drawKeypoints(img1, kp1, img1, color=(255, 0, 255))  # draw keypoints as magenta circles
img4 = cv2.drawKeypoints(img2, kp2, img2, color=(255, 0, 255))  # draw keypoints as magenta circles
hmerge = np.hstack((img3, img4))  # concatenate horizontally
cv2.imshow("point", hmerge)       # show the two keypoint images side by side
cv2.waitKey(0)
# match with BFMatcher
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1,des2, k=2)
# Lowe's ratio test: tune the 0.75 ratio for stricter or looser matching
good = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good.append([m])

img5 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, good, None, flags=2)  # draw only the ratio-test survivors
cv2.imshow("BFmatch", img5)
cv2.waitKey(0)
cv2.destroyAllWindows()

Result: (screenshot of the matched keypoints omitted)

FLANN Matching

FLANN (Fast Library for Approximate Nearest Neighbors) is a fast nearest-neighbor search package: a collection of optimized nearest-neighbor search algorithms for large datasets and high-dimensional features. It outperforms BFMatcher on large datasets.
FLANN has been reported to be about 10 times faster than other nearest-neighbor search software. To use FLANN matching, we pass in two dictionaries that determine which algorithm to use and its parameters.
The first is IndexParams, for example:
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
This configures a kd-tree index and specifies how many kd-trees to build (a reasonable range is 1-16).
The second dictionary is SearchParams, for example:
search_params = dict(checks = 100)
It specifies how many times the index trees should be recursively traversed. Higher values give more accurate results but take more time; in practice, matching quality also depends heavily on the input.
Five kd-trees and 50 checks usually achieve reasonable accuracy in a short time. In the code below, discarding any match whose distance exceeds 0.7 times the second-best distance removes roughly 90% of false matches, though it may also discard some good ones.
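Note that the kd-tree index only applies to float descriptors such as SIFT or SURF. For binary descriptors such as ORB, the OpenCV tutorials configure FLANN with an LSH index instead; a sketch with the commonly suggested (untuned) parameter values:

FLANN_INDEX_LSH = 6
index_params = dict(algorithm = FLANN_INDEX_LSH,
                    table_number = 6,       # number of hash tables
                    key_size = 12,          # hash key size in bits
                    multi_probe_level = 1)  # neighboring buckets probed per query
search_params = dict(checks = 50)
flann = cv2.FlannBasedMatcher(index_params, search_params)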

import numpy as np
import cv2
from matplotlib import pyplot as plt

imgname1 = 'path/to/image1.jpg'  # first image to match
imgname2 = 'path/to/image2.jpg'  # second image to match

surf = cv2.xfeatures2d.SURF_create()  # SURF is patented; requires opencv-contrib-python built with nonfree modules

# matcher parameters
FLANN_INDEX_KDTREE = 1  # kd-tree index (in current OpenCV FLANN enums, 0 is the linear index)
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks = 50)
flann = cv2.FlannBasedMatcher(index_params,search_params)

# image preprocessing
img1 = cv2.imread(imgname1)
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)  # convert to grayscale
kp1, des1 = surf.detectAndCompute(gray1, None)  # kp1: keypoints, des1: descriptors

img2 = cv2.imread(imgname2)
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)  # convert to grayscale
kp2, des2 = surf.detectAndCompute(gray2, None)  # kp2: keypoints, des2: descriptors
# show the two grayscale images side by side
hmerge = np.hstack((gray1, gray2))  # concatenate horizontally
cv2.imshow("gray", hmerge)          # show the concatenated grayscale images
cv2.waitKey(0)

img3 = cv2.drawKeypoints(img1, kp1, img1, color=(255, 0, 255))  # draw keypoints as magenta circles
img4 = cv2.drawKeypoints(img2, kp2, img2, color=(255, 0, 255))  # draw keypoints as magenta circles

hmerge = np.hstack((img3, img4))  # concatenate horizontally
cv2.imshow("point", hmerge)       # show the two keypoint images side by side
cv2.waitKey(0)

matches = flann.knnMatch(des1,des2,k=2)

good = []  # Lowe's ratio test: tune the 0.7 ratio below for stricter or looser matching
for m,n in matches:
    if m.distance < 0.7*n.distance:
        good.append([m])
img5 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,good,None,flags=2)
cv2.imshow("SURF", img5)
cv2.waitKey(0)
cv2.destroyAllWindows()


Origin: blog.csdn.net/weixin_40244676/article/details/104333632