Beyond self-attention: Deformable large-kernel attention for medical image segmentation

Summary

Medical image segmentation has seen significant improvements since the adoption of transformer models, which excel at capturing long-range contextual and global information. However, the computational demands of these models grow quadratically with the number of tokens, limiting their achievable depth and input resolution. Moreover, most current methods process 3D volumetric image data slice-by-slice (so-called pseudo-3D), which loses critical inter-slice information and thus reduces the overall performance of the model. To address these challenges, the paper proposes a deformable large-kernel attention mechanism as an alternative to self-attention.
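To see why the cost grows quadratically, note that self-attention materializes an n-by-n score matrix over n tokens, so doubling the token count quadruples the memory and compute for that matrix. The sketch below (a minimal NumPy illustration, not the paper's method; the function name and shapes are chosen for this example) makes the scaling concrete:

```python
import numpy as np

def self_attention(x):
    """Naive single-head scaled dot-product self-attention.

    x: (n_tokens, d) array. Returns (output, weights) where the
    weight matrix has shape (n_tokens, n_tokens) -- this n x n
    matrix is the source of the quadratic cost in token count.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                  # (n, n) pairwise scores
    scores -= scores.max(axis=-1, keepdims=True)   # softmax numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # rows sum to 1
    return weights @ x, weights

rng = np.random.default_rng(0)
for n in (64, 128):                                # doubling n quadruples weights
    x = rng.standard_normal((n, 32))
    out, w = self_attention(x)
    print(n, w.shape)
```

For a 3D volume flattened into tokens, n scales with height × width × depth, which is why full-resolution volumetric self-attention quickly becomes impractical and motivates convolution-based alternatives such as large-kernel attention.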

Source: blog.csdn.net/m0_47867638/article/details/132993517