A Collection of CVPR Paper Abstracts


1. SRN: Side-output Residual Network for Object Symmetry Detection in the Wild (CVPR 2017)

  • In this paper, we establish a baseline for object symmetry detection in complex backgrounds by presenting a new benchmark and an end-to-end deep learning approach, opening up a promising direction for symmetry detection in the wild. The new benchmark, named Sym-PASCAL, spans challenges including object diversity, multi-objects, part-invisibility, and various complex backgrounds that are far beyond those in existing datasets. The proposed symmetry detection approach, named Side-output Residual Network (SRN), leverages output Residual Units (RUs) to fit the errors between the object symmetry ground truth and the outputs of RUs. By stacking RUs in a deep-to-shallow manner, SRN exploits the 'flow' of errors among multiple scales to ease the problems of fitting complex outputs with limited layers, suppressing the complex backgrounds, and effectively matching object symmetry of different scales. Experimental results validate both the benchmark and its challenging aspects related to real-world images, and the state-of-the-art performance of our symmetry detection approach. The benchmark and the code for SRN are publicly available at https://github.com/KevinKecc/SRN .
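The deep-to-shallow residual stacking idea can be caricatured with a toy sketch. All names, the least-squares "units", and the random data below are illustrative assumptions, not the authors' SRN implementation:

```python
import numpy as np

# Toy sketch of SRN-style residual stacking: each "residual unit" is
# fit to the error still remaining between the ground truth and the
# running estimate, so error corrections "flow" across scales.
def fit_residual_unit(features, residual):
    # Least-squares stand-in for training one side-output unit.
    w, *_ = np.linalg.lstsq(features, residual, rcond=None)
    return w

def stack_residual_units(side_features, ground_truth):
    estimate = np.zeros_like(ground_truth)
    for features in side_features:            # deepest scale first
        w = fit_residual_unit(features, ground_truth - estimate)
        estimate = estimate + features @ w    # add this unit's correction
    return estimate

rng = np.random.default_rng(0)
truth = rng.normal(size=100)                  # fake "symmetry ground truth"
scales = [rng.normal(size=(100, 8)) for _ in range(3)]
pred = stack_residual_units(scales, truth)
err_before = float(np.linalg.norm(truth))
err_after = float(np.linalg.norm(truth - pred))
```

Each stacked unit can only shrink (or keep) the remaining error, which mirrors why fitting residuals scale by scale is easier than fitting the full complex output with limited layers.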

2. Self-learning Scene-specific Pedestrian Detectors Using a Progressive Latent Model (CVPR 2017)

  • In this paper, a self-learning approach is proposed towards solving the scene-specific pedestrian detection problem without any human annotation involved. The self-learning approach is deployed as progressive steps of object discovery, object enforcement, and label propagation. In the learning procedure, object locations in each frame are treated as latent variables that are solved with a progressive latent model (PLM). Compared with conventional latent models, the proposed PLM incorporates a spatial regularization term to reduce ambiguities in object proposals and to enforce object localization, and also a graph-based label propagation to discover harder instances in adjacent frames. With difference-of-convex (DC) objective functions, PLM can be efficiently optimized with concave-convex programming, thereby guaranteeing the stability of self-learning. Extensive experiments demonstrate that even without annotation the proposed self-learning approach outperforms weakly supervised learning approaches, while achieving comparable performance with transfer learning and fully supervised approaches.
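The difference-of-convex optimization behind the PLM can be sketched on a one-dimensional toy problem. The objective f(x) = x² − 2|x| and all names here are assumptions chosen for illustration, not the paper's actual latent-variable objective:

```python
# Concave-convex procedure (CCCP) on f(x) = u(x) - v(x) with
# u(x) = x^2 and v(x) = 2|x|, both convex: linearize v at the
# current point, then minimize the resulting convex upper bound.
def cccp(x0, steps=20):
    x = x0
    for _ in range(steps):
        g = 2.0 if x >= 0 else -2.0    # subgradient of v(x) = 2|x|
        x = g / 2.0                    # argmin of x^2 - g*x (closed form)
    return x

f = lambda x: x * x - 2 * abs(x)       # DC objective, minima at x = +/-1
x_star = cccp(0.7)                     # converges to the minimum at x = 1
```

Each CCCP iteration monotonically decreases the DC objective, which is the kind of stability guarantee the self-learning procedure relies on.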

3. Texture Classification in Extreme Scale Variations Using GANet (CVPR 2019)

  • Research in texture recognition often concentrates on recognizing textures with intraclass variations such as illumination, rotation, viewpoint and small scale changes. In contrast, in real-world applications a change in scale can have a dramatic impact on texture appearance, to the point of changing completely from one texture category to another. As a result, texture variations due to changes in scale are amongst the hardest to handle. In this work we conduct the first study of classifying textures with extreme variations in scale. To address this issue, we first propose and then reduce scale proposals on the basis of dominant texture patterns. Motivated by the challenges posed by this problem, we propose a new GANet network where we use a Genetic Algorithm to change the filters in the hidden layers during network training, in order to promote the learning of more informative semantic texture patterns. Finally, we adopt an FV-CNN (Fisher Vector pooling of a Convolutional Neural Network filter bank) feature encoder for global texture representation. Because extreme scale variations are not necessarily present in most standard texture databases, to support the proposed extreme-scale aspects of texture understanding we are developing a new dataset, the Extreme Scale Variation Textures (ESVaT), to test the performance of our framework. It is demonstrated that the proposed framework significantly outperforms gold-standard texture features by more than 10% on ESVaT. We also test the performance of our proposed approach on the KTH-TIPS2b and OS datasets and a further dataset synthetically derived from Forrest, showing superior performance compared to the state of the art.
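The genetic-algorithm filter search can be caricatured as a selection-plus-mutation loop over plain vectors. The fitness function is a made-up stand-in, crossover and everything network-specific are omitted, so this is only a sketch of the idea, not GANet itself:

```python
import numpy as np

# Selection + mutation loop in the spirit of evolving "filters": keep
# the fitter half of the population, mutate it to produce children,
# and repeat. The fitness function stands in for "how informative a
# filter's responses are" (an assumption for this toy).
def evolve_filters(fitness, dim=8, pop=20, gens=30, seed=0):
    rng = np.random.default_rng(seed)
    population = rng.normal(size=(pop, dim))
    for _ in range(gens):
        scores = np.array([fitness(f) for f in population])
        elite = population[np.argsort(scores)[-pop // 2:]]      # selection
        children = elite + 0.1 * rng.normal(size=elite.shape)   # mutation
        population = np.vstack([elite, children])
    scores = np.array([fitness(f) for f in population])
    return population[np.argmax(scores)]

target = np.ones(8)                           # pretend "ideal" filter
fitness = lambda f: -float(np.sum((f - target) ** 2))
best = evolve_filters(fitness)
```

Because the elite survives unchanged each generation, the best score never decreases, which makes this kind of search usable alongside gradient-based training.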

4. Saliency Integration: An Arbitrator Model (TMM 2018)

  • Saliency integration has attracted much attention on unifying saliency maps from multiple saliency models. Previous offline integration methods usually face two challenges: 1) if most of the candidate saliency models misjudge the saliency on an image, the integration result will lean heavily on those inferior candidate models; 2) an unawareness of the ground-truth saliency labels brings difficulty in estimating the expertise of each candidate model. To address these problems, in this paper, we propose an arbitrator model (AM) for saliency integration. Firstly, we incorporate the consensus of multiple saliency models and external knowledge into a reference map to effectively rectify the misleading by candidate models. Secondly, our quest for ways of estimating the expertise of the saliency models without ground-truth labels gives rise to two distinct online model-expertise estimation methods. Finally, we derive a Bayesian integration framework to reconcile saliency models of varying expertise and the reference map. To extensively evaluate the proposed AM model, we test twenty-seven state-of-the-art saliency models, covering both traditional and deep learning ones, on various combinations over four datasets. The evaluation results show that the AM model improves the performance substantially compared to the existing state-of-the-art integration methods, regardless of the chosen candidate saliency models.
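The arbitrator idea, a consensus reference map, per-model expertise, and expertise-weighted fusion, can be sketched as follows. The mean-based reference and the exponential weighting are simplifying assumptions, not the paper's Bayesian derivation:

```python
import numpy as np

# Toy arbitrator-style fusion: build a consensus reference map, score
# each candidate model by its agreement with that reference (a stand-in
# for "expertise" without ground truth), then fuse maps by weight.
def integrate_saliency(maps):
    maps = np.asarray(maps, dtype=float)             # (models, H, W)
    reference = maps.mean(axis=0)                    # consensus reference map
    errors = np.abs(maps - reference).mean(axis=(1, 2))
    weights = np.exp(-errors)                        # low error -> high weight
    weights /= weights.sum()
    fused = np.tensordot(weights, maps, axes=1)      # weighted fusion
    return fused, weights

good = np.full((4, 4), 0.8)
also_good = np.full((4, 4), 0.7)
outlier = np.zeros((4, 4))                           # a misjudging model
fused, w = integrate_saliency([good, also_good, outlier])
```

Even this crude scheme down-weights the outlier model, which is the behavior the reference map is meant to enforce when some candidates misjudge the image.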

5. Hierarchical Contour Closure Based Holistic Salient Object Detection (TIM 2017)

  • Most existing salient object detection methods compute the saliency for pixels, patches or superpixels by contrast. Such fine-grained contrast-based salient object detection methods are stuck with saliency attenuation of the salient object and saliency overestimation of the background when the image is complicated. To better compute the saliency for complicated images, we propose a hierarchical contour closure based holistic salient object detection method, in which two saliency cues, i.e., closure completeness and closure reliability, are thoroughly exploited. The former pops out the holistic homogeneous regions bounded by completely closed outer contours, and the latter highlights the holistic homogeneous regions bounded by averagely highly reliable outer contours. Accordingly, we propose two computational schemes to compute the corresponding saliency maps in a hierarchical segmentation space. Finally, we propose a framework to combine the two saliency maps, obtaining the final saliency map. Experimental results on three publicly available datasets show that even each single saliency map is able to reach the state-of-the-art performance. Furthermore, our framework combining the two saliency maps outperforms the state of the art. Additionally, we show that the proposed framework can be easily used to extend existing methods and further improve their performance substantially.
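The closure completeness cue can be illustrated with a toy flood fill (an assumption-laden sketch, not the paper's hierarchical method): pixels that cannot be reached from the image border without crossing a contour lie inside a completely closed contour, so they pop out as salient.

```python
import numpy as np
from collections import deque

def closed_region_saliency(edges):
    # Flood-fill every non-edge pixel reachable from the image border;
    # whatever stays unreached is enclosed by a fully closed contour.
    h, w = edges.shape
    outside = np.zeros((h, w), dtype=bool)
    queue = deque(
        (r, c)
        for r in range(h) for c in range(w)
        if (r in (0, h - 1) or c in (0, w - 1)) and not edges[r, c]
    )
    for r, c in queue:
        outside[r, c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w
                    and not outside[nr, nc] and not edges[nr, nc]):
                outside[nr, nc] = True
                queue.append((nr, nc))
    return (~outside & (edges == 0)).astype(float)

edges = np.zeros((7, 7), dtype=int)
edges[1, 1:6] = edges[5, 1:6] = 1        # top and bottom of a closed box
edges[1:6, 1] = edges[1:6, 5] = 1        # left and right sides
saliency = closed_region_saliency(edges)  # interior 3x3 region is salient
```

A gap anywhere in the box would let the border fill leak inside and zero out the map, which is exactly why the full method also needs the softer closure reliability cue for contours that are not perfectly closed.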


Reposted from blog.csdn.net/qq_40092110/article/details/104706511