M3Net

M3Net: Multi-scale Multi-path Multi-modal Fusion Network and Example Application to RGB-D Salient Object Detection


Fusing RGB and depth data is a compelling way to boost performance on various robotic and computer vision tasks.
Typically, the streams of RGB and depth information are merged at a single fusion point, in an early or late stage, to generate combined features or decisions.
A single fusion point also implies a single fusion path, which is too congested and inflexible to fuse all the information from the different modalities.
As a result, the fusion process is brute-force and consequently insufficient.
To address this problem, we propose a multi-scale multi-path multi-modal fusion network (M3Net), in which the fusion path is scattered to diversify the contributions of each modality from global and local perspectives.
Specifically, the CNN streams of each modality are fused through both a global understanding path and a local capturing path.
By filtering and regulating information flow in a multi-path way, M3Net is equipped with a more adaptive and flexible fusion mechanism, which eases the gradient-based learning process, improves the directness and transparency of the fusion process, and simultaneously facilitates fusion from multi-scale perspectives.
Comprehensive experiments demonstrate the significant and consistent improvements of the proposed approach over state-of-the-art methods.
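The global-plus-local, multi-scale fusion idea described above can be sketched in a few lines. This is an illustrative NumPy sketch, not the paper's actual M3Net architecture: the specific gating choices (softmax over globally pooled features for the global path, a per-location sigmoid gate for the local path) and the function names `global_path`, `local_path`, and `m3_fuse` are assumptions made for illustration.

```python
import numpy as np

def global_path(rgb, depth):
    # Global understanding path (illustrative assumption): derive
    # channel-wise gates from globally pooled features of each stream,
    # then mix the two modalities with a softmax over the gates.
    g_rgb = rgb.mean(axis=(1, 2), keepdims=True)      # (C, 1, 1)
    g_depth = depth.mean(axis=(1, 2), keepdims=True)  # (C, 1, 1)
    e = np.exp(np.stack([g_rgb, g_depth], axis=0))    # (2, C, 1, 1)
    w = e / e.sum(axis=0, keepdims=True)              # modality weights
    return w[0] * rgb + w[1] * depth

def local_path(rgb, depth):
    # Local capturing path (illustrative assumption): a per-location
    # sigmoid gate computed from the depth stream blends the two
    # modalities at every spatial position independently.
    gate = 1.0 / (1.0 + np.exp(-depth))
    return gate * rgb + (1.0 - gate) * depth

def m3_fuse(rgb_scales, depth_scales):
    # Multi-scale multi-path fusion: at each scale, fuse the two CNN
    # streams along both paths and average, so each modality contributes
    # from both a global and a local perspective.
    return [0.5 * (global_path(r, d) + local_path(r, d))
            for r, d in zip(rgb_scales, depth_scales)]

# Toy multi-scale features: (channels, H, W) at two scales.
rgb = [np.random.rand(8, 16, 16), np.random.rand(8, 8, 8)]
depth = [np.random.rand(8, 16, 16), np.random.rand(8, 8, 8)]
fused = m3_fuse(rgb, depth)
print([f.shape for f in fused])  # [(8, 16, 16), (8, 8, 8)]
```

The key point the sketch captures is that fusion is no longer a single bottleneck: information from each modality reaches the fused representation through two independently gated paths at every scale, rather than through one fusion point.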



Reposted from blog.csdn.net/zjc910997316/article/details/112991998