Which video background/foreground segmentation (background modeling / foreground extraction) algorithm classes are available in OpenCV 3.0, what are the principles and characteristics of each algorithm, with example code

The video background/foreground segmentation (background modeling / foreground extraction) algorithm classes available in OpenCV 3 are summarized as follows:
cv::Algorithm
  cv::BackgroundSubtractor
    cv::BackgroundSubtractorKNN
    cv::BackgroundSubtractorMOG2
      cv::cuda::BackgroundSubtractorMOG2
    cv::bgsegm::BackgroundSubtractorGMG
    cv::bgsegm::BackgroundSubtractorMOG
    cv::cuda::BackgroundSubtractorFGD
    cv::cuda::BackgroundSubtractorGMG
    cv::cuda::BackgroundSubtractorMOG
The summary above not only lists which video background/foreground segmentation (background modeling / foreground extraction) algorithm classes exist in OpenCV 3.0, it also shows their inheritance relationships.
Every concrete algorithm class derives from cv::BackgroundSubtractor, which in turn derives from cv::Algorithm.
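Because they all share the cv::BackgroundSubtractor interface, any of these classes can be driven by the same processing loop through a base-class pointer. Below is a minimal sketch of that common pattern (not taken from the posts linked later); the video path "test.avi" is just a placeholder, and KNN is used only as one possible choice:

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("test.avi");                 // placeholder video path
    if (!cap.isOpened()) return -1;

    // Any of the factory functions could be used here; the loop below only
    // sees the cv::BackgroundSubtractor base class.
    cv::Ptr<cv::BackgroundSubtractor> subtractor = cv::createBackgroundSubtractorKNN();

    cv::Mat frame, fgMask;
    while (cap.read(frame))
    {
        subtractor->apply(frame, fgMask);             // update the model, output the foreground mask
        cv::imshow("frame", frame);
        cv::imshow("foreground mask", fgMask);
        if (cv::waitKey(30) == 27) break;             // Esc to quit
    }
    return 0;
}
```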

Each class is introduced in turn below:

cv::BackgroundSubtractorKNN

cv::BackgroundSubtractorKNN implements background modeling based on the K-nearest neighbours (KNN) idea.
For its algorithm principle, member function descriptions, and example code, see the blog post https://blog.csdn.net/wenhao_ir/article/details/125007017
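As a quick, hedged sketch in addition to that post (not the code from it): the parameter values below are the documented defaults, "test.avi" is a placeholder path, and setShadowValue(0) is only one possible tweak to hide shadow pixels in the mask.

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("test.avi");
    if (!cap.isOpened()) return -1;

    // history = 500 frames, dist2Threshold = 400.0, detectShadows = true (defaults)
    cv::Ptr<cv::BackgroundSubtractorKNN> knn =
        cv::createBackgroundSubtractorKNN(500, 400.0, true);
    knn->setShadowValue(0);          // mark detected shadows as 0 instead of the default 127

    cv::Mat frame, fgMask;
    while (cap.read(frame))
    {
        knn->apply(frame, fgMask);   // foreground = 255, shadows = shadow value, background = 0
        cv::imshow("KNN foreground mask", fgMask);
        if (cv::waitKey(30) == 27) break;
    }
    return 0;
}
```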

cv::bgsegm::BackgroundSubtractorMOG

cv::bgsegm::BackgroundSubtractorMOG is a background/foreground segmentation algorithm based on a mixture-of-Gaussians model.
For its algorithm principle, member function descriptions, and example code, see the blog post https://blog.csdn.net/wenhao_ir/article/details/125010301
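A minimal, hedged sketch in addition to that post: this class lives in the opencv_contrib bgsegm module and is created through cv::bgsegm::createBackgroundSubtractorMOG. The parameters shown are the documented defaults and "test.avi" is a placeholder path.

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/bgsegm.hpp>   // opencv_contrib module that provides BackgroundSubtractorMOG

int main()
{
    cv::VideoCapture cap("test.avi");
    if (!cap.isOpened()) return -1;

    // history = 200, nmixtures = 5 Gaussians per pixel, backgroundRatio = 0.7, noiseSigma = 0
    cv::Ptr<cv::bgsegm::BackgroundSubtractorMOG> mog =
        cv::bgsegm::createBackgroundSubtractorMOG(200, 5, 0.7, 0);

    cv::Mat frame, fgMask;
    while (cap.read(frame))
    {
        mog->apply(frame, fgMask);
        cv::imshow("MOG foreground mask", fgMask);
        if (cv::waitKey(30) == 27) break;
    }
    return 0;
}
```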

cv::cuda::BackgroundSubtractorMOG

cv::cuda::BackgroundSubtractorMOG is the CUDA implementation of cv::bgsegm::BackgroundSubtractorMOG.
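The CUDA variants expose the same interface but operate on cv::cuda::GpuMat. Below is a minimal sketch of that pattern, assuming OpenCV was built with CUDA and the cudabgsegm module; the same upload/apply/download flow applies to the other cv::cuda subtractors as well, and "test.avi" is a placeholder path.

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/cudabgsegm.hpp>   // CUDA MOG/MOG2 background subtractors

int main()
{
    cv::VideoCapture cap("test.avi");
    if (!cap.isOpened()) return -1;

    cv::Ptr<cv::cuda::BackgroundSubtractorMOG> mog =
        cv::cuda::createBackgroundSubtractorMOG();    // same default parameters as the CPU version

    cv::cuda::GpuMat d_frame, d_fgMask;
    cv::Mat frame, fgMask;
    while (cap.read(frame))
    {
        d_frame.upload(frame);                        // copy the frame to GPU memory
        mog->apply(d_frame, d_fgMask, -1.0, cv::cuda::Stream::Null());  // -1 = automatic learning rate
        d_fgMask.download(fgMask);                    // copy the mask back to the CPU
        cv::imshow("CUDA MOG foreground mask", fgMask);
        if (cv::waitKey(30) == 27) break;
    }
    return 0;
}
```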

cv::BackgroundSubtractorMOG2

cv::BackgroundSubtractorMOG2 is an improved version of mixture-of-Gaussians background modeling. It updates the parameters of the adaptive Gaussian mixture model online, which improves background detection in complex scenes. It also selects an appropriate number of Gaussian components for each pixel, so it adapts better to scene variations caused by illumination changes.
For its algorithm principle, member function descriptions, and example code, see the blog post https://blog.csdn.net/wenhao_ir/article/details/125017245
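A brief, hedged sketch in addition to that post: MOG2 is in OpenCV's main video module, so no contrib module is needed. With detectShadows enabled, shadow pixels are marked in gray (value 127) in the mask, and getBackgroundImage() returns the current background estimate. The parameters are the documented defaults and "test.avi" is a placeholder path.

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("test.avi");
    if (!cap.isOpened()) return -1;

    // history = 500, varThreshold = 16, detectShadows = true (defaults)
    cv::Ptr<cv::BackgroundSubtractorMOG2> mog2 =
        cv::createBackgroundSubtractorMOG2(500, 16, true);

    cv::Mat frame, fgMask, background;
    while (cap.read(frame))
    {
        mog2->apply(frame, fgMask);                // foreground = 255, shadows = 127, background = 0
        mog2->getBackgroundImage(background);      // current background estimate
        cv::imshow("MOG2 foreground mask", fgMask);
        cv::imshow("estimated background", background);
        if (cv::waitKey(30) == 27) break;
    }
    return 0;
}
```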

cv::cuda::BackgroundSubtractorMOG2

cv::cuda::BackgroundSubtractorMOG2 is the CUDA implementation of cv::BackgroundSubtractorMOG2.

cv::bgsegm::BackgroundSubtractorGMG

cv::bgsegm::BackgroundSubtractorGMG implements the algorithm described in the following paper:
Andrew B Godbehere, Akihiro Matsukawa, and Ken Goldberg. Visual tracking of human visitors under variable-lighting conditions for a responsive audio art installation. In American Control Conference (ACC), 2012, pages 4305–4312. IEEE, 2012.
The latter half of the title may sound puzzling: what exactly is a "responsive audio art installation"? Reading the abstract below should make it clear.
Link to the paper's abstract: https://link.springer.com/chapter/10.1007/978-3-319-03904-6_8
The abstract is as follows:

For a responsive audio art installation in a skylit atrium, we developed a single-camera statistical segmentation and tracking algorithm. The algorithm combines statistical background image estimation, per-pixel Bayesian classification, and an approximate solution to the multi-target tracking problem using a bank of Kalman filters and Gale-Shapley matching. A heuristic confidence model enables selective filtering of tracks based on dynamic data. Experiments suggest that our algorithm improves recall and \(F_2\)-score over existing methods in OpenCV 2.1. We also find that feedback between the tracking and the segmentation systems improves recall and \(F_2\)-score. The system operated effectively for 5–8 h per day for 4 months. Source code and sample data is open source and available in OpenCV.

Key points:
① The algorithm combines statistical background image estimation, per-pixel Bayesian classification, and an approximate solution to the multi-target tracking problem using a bank of Kalman filters and Gale-Shapley matching.
② A heuristic confidence model enables selective filtering of tracks based on dynamic data.
(As for the term "recall" used above: it is the proportion of true foreground targets that the algorithm actually detects.)

The two paragraphs below, excerpted from other sources, describe this algorithm:

The algorithm combines static background image estimation with per-pixel Bayesian segmentation.
It uses only the first few frames (the first 120 frames by default) for background modeling. It employs a probabilistic foreground segmentation algorithm that identifies likely foreground objects via Bayesian inference, and the estimation is adaptive: newly observed objects are given higher weights than older ones in order to accommodate illumination changes. Morphological filtering operations such as opening and closing are used to remove unwanted noise. During the first few frames you will get a black window. Applying a morphological opening to the output is very helpful for removing this noise.

This method is based on statistical background model estimation: it first gathers and quantizes histogram information in RGB color space, then trains an initial per-pixel background model from the first T frames, then uses Bayes' rule to compute the probability that a pixel should be classified as foreground, and finally updates the model parameters with background features to produce the foreground targets.

For its member function descriptions and example code, see the blog post https://blog.csdn.net/wenhao_ir/article/details/125069369
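In addition to that post, here is a minimal, hedged sketch reflecting the description above: GMG is in the opencv_contrib bgsegm module, needs an initialization period (120 frames by default, during which the mask stays black), and benefits from a morphological opening to suppress noise. "test.avi" is a placeholder path.

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/bgsegm.hpp>   // opencv_contrib module that provides BackgroundSubtractorGMG

int main()
{
    cv::VideoCapture cap("test.avi");
    if (!cap.isOpened()) return -1;

    // initializationFrames = 120, decisionThreshold = 0.8 (defaults)
    cv::Ptr<cv::bgsegm::BackgroundSubtractorGMG> gmg =
        cv::bgsegm::createBackgroundSubtractorGMG(120, 0.8);

    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));
    cv::Mat frame, fgMask;
    while (cap.read(frame))
    {
        gmg->apply(frame, fgMask);
        // remove small speckles of noise with a morphological opening
        cv::morphologyEx(fgMask, fgMask, cv::MORPH_OPEN, kernel);
        cv::imshow("GMG foreground mask", fgMask);   // stays black during the first 120 frames
        if (cv::waitKey(30) == 27) break;
    }
    return 0;
}
```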

cv::cuda::BackgroundSubtractorGMG

cv::cuda::BackgroundSubtractorGMG is the CUDA implementation of cv::bgsegm::BackgroundSubtractorGMG, which was just covered above.

cv::cuda::BackgroundSubtractorFGD

cv::cuda::BackgroundSubtractorFGD is a CUDA-based background/foreground segmentation algorithm; it implements the method from the following paper:
Liyuan Li, Weimin Huang, Irene YH Gu, and Qi Tian. Foreground object detection from videos containing complex background. In Proceedings of the eleventh ACM international conference on Multimedia, pages 2–10. ACM, 2003.
The abstract is as follows:

This paper proposes a novel method for detection and segmentation of foreground objects from a video which contains both stationary and moving background objects and undergoes both gradual and sudden “once-off” changes. A Bayes decision rule for classification of background and foreground from selected feature vectors is formulated. Under this rule, different types of background objects will be classified from foreground objects by choosing a proper feature vector. The stationary background object is described by the color feature, and the moving background object is represented by the color co-occurrence feature. Foreground objects are extracted by fusing the classification results from both stationary and moving pixels. Learning strategies for the gradual and sudden “once-off” background changes are proposed to adapt to various changes in background through the video. The convergence of the learning process is proved and a formula to select a proper learning rate is also derived. Experiments have shown promising results in extracting foreground objects from many complex backgrounds including wavering tree branches, flickering screens and water surfaces, moving escalators, opening and closing doors, switching lights and shadows of moving objects.

Link to the abstract and the paper: https://dl.acm.org/doi/10.1145/957013.957017
Key points:
① Stationary background objects are described by a color feature, while moving background objects are represented by a color co-occurrence feature.
② Learning strategies are proposed for both gradual and sudden "once-off" background changes.
③ The method gives promising results when extracting foreground objects from many complex backgrounds, including wavering tree branches, flickering screens and water surfaces, moving escalators, opening and closing doors, switching lights, and shadows of moving objects.
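As a rough, hedged sketch only (this class has no dedicated post above): the factory function cv::cuda::createBackgroundSubtractorFGD lives in the opencv_contrib cudalegacy module and takes a cv::cuda::FGDParams structure holding the algorithm's tuning parameters; the defaults are used here and "test.avi" is a placeholder path.

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/cudalegacy.hpp>   // opencv_contrib module that provides the CUDA FGD subtractor

int main()
{
    cv::VideoCapture cap("test.avi");
    if (!cap.isOpened()) return -1;

    cv::Ptr<cv::cuda::BackgroundSubtractorFGD> fgd =
        cv::cuda::createBackgroundSubtractorFGD();   // default cv::cuda::FGDParams

    cv::cuda::GpuMat d_frame, d_fgMask;
    cv::Mat frame, fgMask;
    while (cap.read(frame))
    {
        d_frame.upload(frame);
        fgd->apply(d_frame, d_fgMask);               // per-pixel Bayesian classification on the GPU
        d_fgMask.download(fgMask);
        cv::imshow("FGD foreground mask", fgMask);
        if (cv::waitKey(30) == 27) break;
    }
    return 0;
}
```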

To summarize: as of OpenCV 3.0, five background/foreground segmentation (background modeling / foreground extraction) algorithms have been implemented, named KNN, MOG, MOG2, GMG, and FGD.

Further reading:
Which video background/foreground segmentation (background modeling / foreground extraction) algorithm classes are available in OpenCV 4, and what are their algorithm principles and characteristics.


Reposted from blog.csdn.net/wenhao_ir/article/details/124991529