Marking the Differences Between Two Images

Possible reasons the differences are not being detected:

  1. The differences are too subtle and the threshold is not set appropriately: the differences may be small changes in color or position, and the current threshold and processing steps may not be sensitive enough to pick them up.

  2. Image registration is not precise enough: because the two images are highly similar, feature-point matching may contain errors, leading to inaccurate alignment that degrades difference detection.

  3. Grayscale conversion discards color information: if the differences lie in the colors, they may be missed after converting to grayscale.

  4. The morphological and area-filter parameters are unsuitable: the morphology and area-filtering settings may be removing small difference regions.


Solutions

1. Lower the threshold to increase sensitivity
  • Lower the threshold: in the thresholding step, reduce the threshold from 30 to a smaller value, such as 5 to 10, so that subtle differences are detected.

    _, thresh = cv2.threshold(diff, 5, 255, cv2.THRESH_BINARY)
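
  • Automatic threshold (a sketch, not from the original post): if hand-picking a value is awkward, Otsu's method can choose the threshold from the histogram of the difference image.

    # Assumes `diff` is the single-channel uint8 difference image from above;
    # with THRESH_OTSU, OpenCV ignores the passed value 0 and picks the threshold itself
    _, thresh = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)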
    
2. Use the color images for difference detection
  • Compute the difference on the color images directly: since the differences may lie in the colors, computing the difference between the color images is more effective.

    # Compute the difference between the color images
    diff_color = cv2.absdiff(img1_aligned, img2_color)
    # Convert to grayscale
    diff_gray = cv2.cvtColor(diff_color, cv2.COLOR_BGR2GRAY)
    # Threshold
    _, thresh = cv2.threshold(diff_gray, 5, 255, cv2.THRESH_BINARY)
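
  • Channel-wise variant (a sketch using the same variables): converting the color difference to grayscale takes a weighted average of the three channels, so a change confined to one channel can be washed out; taking the per-pixel maximum across channels preserves it.

    # Keep the strongest per-channel difference instead of the weighted grayscale average
    diff_color = cv2.absdiff(img1_aligned, img2_color)
    diff_max = np.max(diff_color, axis=2)  # still uint8, shape H x W
    _, thresh = cv2.threshold(diff_max, 5, 255, cv2.THRESH_BINARY)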
    
3. Use structural similarity (SSIM)
  • SSIM is more sensitive to subtle differences: it can detect small changes in brightness, contrast, and structure.

    from skimage.metrics import structural_similarity as ssim

    # Compute SSIM
    score, diff = ssim(img1_aligned_gray, img2_gray, full=True)
    diff = (diff * 255).astype("uint8")
    diff = cv2.bitwise_not(diff)  # Invert the image
    # Threshold
    _, thresh = cv2.threshold(diff, 5, 255, cv2.THRESH_BINARY)
    

    Note: the scikit-image library needs to be installed:

    pip install scikit-image
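
  • Color SSIM (a sketch, assuming scikit-image >= 0.19, where the channel_axis parameter is available): SSIM can also be computed on the color images directly, which helps when the differences are purely chromatic. Variable names reuse the full code below.

    # Per-channel SSIM on the aligned color images
    score, diff = ssim(img1_aligned, img2_color, full=True, channel_axis=2)
    diff = (np.min(diff, axis=2) * 255).astype("uint8")  # keep the least-similar channel per pixel
    diff = cv2.bitwise_not(diff)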
    
4. Adjust the morphological operations and the area threshold
  • Morphological operations: adjust the number of iterations and the kernel size to preserve more detail.

    kernel = np.ones((3, 3), np.uint8)
    thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=1)
    thresh = cv2.dilate(thresh, kernel, iterations=1)
    
  • Lower the area-filter threshold: reduce the cv2.contourArea() cutoff so that small difference regions are also marked.

    if area > 5:  # lowered from 50 to 5
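
  • Marking variant (a sketch reusing the contours and area threshold from the full code below): if the raw outlines are hard to see on a busy background, draw bounding rectangles around each difference region instead.

    for contour in contours:
        if cv2.contourArea(contour) > 5:
            x, y, w, h = cv2.boundingRect(contour)
            cv2.rectangle(img2_color, (x, y), (x + w, y + h), (0, 0, 255), 2)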
    
5. Verify the image registration
  • Visualize the matched feature points: check whether the feature matching is accurate.

    # Draw the first 50 matches
    img_matches = cv2.drawMatches(img1_color, keypoints1, img2_color, keypoints2, good_matches[:50], None, flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
    cv2.imshow('Matches', img_matches)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    
  • Try other feature detectors: for example SIFT or SURF, keeping their patent and licensing status in mind; see the sketch below.
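
    A minimal sketch of swapping in SIFT (assumes OpenCV >= 4.4, where SIFT lives in the main module after its patent expired; SURF is still only in the non-free opencv-contrib build):

    sift = cv2.SIFT_create()
    keypoints1, descriptors1 = sift.detectAndCompute(img1_gray, None)
    keypoints2, descriptors2 = sift.detectAndCompute(img2_gray, None)
    bf = cv2.BFMatcher(cv2.NORM_L2)  # SIFT descriptors are float vectors, so use L2 distance, not Hamming
    matches = bf.knnMatch(descriptors1, descriptors2, k=2)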


Full modified code

import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim

# Read the two images
img1_color = cv2.imread('find_difference_image1.png')  # Original image 1, to be aligned
img2_color = cv2.imread('find_difference_image2.png')  # Original image 2, used as the reference

# Check that the images were read successfully
if img1_color is None or img2_color is None:
    print("Error: could not read the images. Please check the file paths.")
    exit()

# Convert the images to grayscale
img1_gray = cv2.cvtColor(img1_color, cv2.COLOR_BGR2GRAY)
img2_gray = cv2.cvtColor(img2_color, cv2.COLOR_BGR2GRAY)

# Create the ORB feature detector
orb = cv2.ORB_create(10000)  # Increase the number of feature points

# Detect keypoints and compute descriptors
keypoints1, descriptors1 = orb.detectAndCompute(img1_gray, None)
keypoints2, descriptors2 = orb.detectAndCompute(img2_gray, None)

# Create the BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING)

# KNN matching with k=2
matches = bf.knnMatch(descriptors1, descriptors2, k=2)

# Filter the matches with Lowe's ratio test
good_matches = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good_matches.append(m)

# Check that there are enough matches
if len(good_matches) > 10:
    # Extract the coordinates of the matched keypoints
    src_pts = np.float32([keypoints1[m.queryIdx].pt for m in good_matches]).reshape(-1, 1, 2)
    dst_pts = np.float32([keypoints2[m.trainIdx].pt for m in good_matches]).reshape(-1, 1, 2)

    # Compute the homography matrix
    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

    # Warp img1 into img2's coordinate frame
    h, w = img2_gray.shape
    img1_aligned = cv2.warpPerspective(img1_color, M, (w, h))

    # Compute the difference with SSIM
    img1_aligned_gray = cv2.cvtColor(img1_aligned, cv2.COLOR_BGR2GRAY)
    score, diff = ssim(img1_aligned_gray, img2_gray, full=True)
    diff = (diff * 255).astype("uint8")
    diff = cv2.bitwise_not(diff)  # Invert the image so differences become bright

    # Threshold
    _, thresh = cv2.threshold(diff, 5, 255, cv2.THRESH_BINARY)

    # Morphological operations to remove noise and tiny artifacts
    kernel = np.ones((3, 3), np.uint8)
    thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=1)
    thresh = cv2.dilate(thresh, kernel, iterations=1)

    # Find the contours of the difference regions
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Compute the inverse transform matrix
    M_inv = np.linalg.inv(M)

    # Draw the difference regions on the original img1_color
    for contour in contours:
        area = cv2.contourArea(contour)
        if area > 5:
            # Convert the contour coordinates to float
            contour = contour.astype(np.float32)
            # Map the coordinates back into img1's frame with the inverse homography
            contour_transformed = cv2.perspectiveTransform(contour, M_inv)
            # Convert the coordinates back to integers
            contour_transformed = contour_transformed.astype(np.int32)
            # Draw the contour
            cv2.drawContours(img1_color, [contour_transformed], -1, (0, 0, 255), 2)

    # Draw the difference regions on the original image 2
    for contour in contours:
        area = cv2.contourArea(contour)
        if area > 5:
            cv2.drawContours(img2_color, [contour], -1, (0, 0, 255), 2)

    # Resize the images for display
    img1_original_resized = cv2.resize(cv2.imread('find_difference_image1.png'), (400, 300))
    img2_original_resized = cv2.resize(cv2.imread('find_difference_image2.png'), (400, 300))
    img1_diff_resized = cv2.resize(img1_color, (400, 300))
    img2_diff_resized = cv2.resize(img2_color, (400, 300))

    # Stitch the four images into one
    top_row = np.hstack((img1_original_resized, img2_original_resized))
    bottom_row = np.hstack((img1_diff_resized, img2_diff_resized))
    combined_image = np.vstack((top_row, bottom_row))

    # Show the combined image
    cv2.imshow('Original and Difference Images', combined_image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
else:
    print("Not enough matches to compute the homography matrix.")
    exit()

Further suggestions

  • Check the registration quality: use cv2.drawMatches() to visualize the feature matches and make sure the alignment is accurate.

  • Adjust the SSIM parameters: the ssim() function's parameters, such as gaussian_weights and sigma, can be tuned to improve detection of subtle differences (see the sketch after this list).

  • Try other difference-detection methods: for example, compare color histograms, or use a more advanced image-differencing algorithm.
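
  • SSIM tuning sketch (the chosen values are illustrative; gaussian_weights, sigma, and use_sample_covariance are documented structural_similarity options, and the grayscale images come from the full code above):

    score, diff = ssim(img1_aligned_gray, img2_gray, full=True,
                       gaussian_weights=True, sigma=1.5,
                       use_sample_covariance=False)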

Reposted from blog.csdn.net/sunyuhua_keyboard/article/details/143083471