Attention Mechanism Learning Record

ECANet channel attention module

First look at its model structure:
[figure: ECA module structure diagram]
Code implementation:

import torch
from torch import nn


class eca_layer(nn.Module):
    """ECA channel attention: a 1D convolution over per-channel descriptors."""
    def __init__(self, channel, k_size=3):
        super(eca_layer, self).__init__()
        # global average pooling: (N, C, H, W) -> (N, C, 1, 1)
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        # 1D conv across the channel dimension; this padding keeps the length C unchanged
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=(k_size - 1) // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        y = self.avg_pool(x)
        # reshape to (N, 1, C), convolve across channels, then reshape back to (N, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(-1, -2)).transpose(-1, -2).unsqueeze(-1)
        y = self.sigmoid(y)
        # scale every channel of x by its attention weight
        return x * y.expand_as(x)


# note: channel is never read in __init__, so passing 256 with a 3-channel input still runs
model = eca_layer(256)
data = torch.randn((2, 3, 224, 224))
feature = model(data)
print(model)
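
As a quick sanity check, ECA is a gating module (it only rescales channels), so the output keeps the input's shape:

assert feature.shape == data.shape
print(feature.shape)  # torch.Size([2, 3, 224, 224])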

First, the input data has shape torch.Size([2, 3, 224, 224]).
Then, after the average pooling y = self.avg_pool(x), it becomes torch.Size([2, 3, 1, 1]).
Let's break the dimension handling in forward() down and inspect it step by step.
The squeeze method removes a dimension of size 1, the unsqueeze method inserts a new dimension of size 1, and the transpose method swaps two dimensions.
For a tensor of shape torch.Size([2, 1, 3]), the dimensions are numbered 0, 1, 2 from front to back, and -1, -2, -3 from back to front.
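
As a minimal standalone demo of these three operations (shapes chosen to match the example below):

import torch

t = torch.randn(2, 3, 1)          # dims indexed 0, 1, 2 (or -3, -2, -1)
print(t.squeeze(-1).shape)        # torch.Size([2, 3])      removes the trailing size-1 dim
print(t.unsqueeze(-1).shape)      # torch.Size([2, 3, 1, 1]) adds a new trailing dim
print(t.transpose(-1, -2).shape)  # torch.Size([2, 1, 3])   swaps the last two dims

Applying these operations inside forward(), step by step: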

y = self.avg_pool(x)        # torch.Size([2, 3, 1, 1])
y = y.squeeze(-1)           # torch.Size([2, 3, 1])
y = y.transpose(-1, -2)     # torch.Size([2, 1, 3])
y = self.conv(y)            # torch.Size([2, 1, 3])
y = y.transpose(-1, -2)     # torch.Size([2, 3, 1])
y = y.unsqueeze(-1)         # torch.Size([2, 3, 1, 1])
y = self.sigmoid(y)         # torch.Size([2, 3, 1, 1])
return x * y.expand_as(x)   # torch.Size([2, 3, 224, 224])
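
As a quick check that this breakdown is equivalent to the chained one-line expression in forward(), a small sketch reusing the module's own submodules:

model = eca_layer(256)
x = torch.randn(2, 3, 224, 224)

# step-by-step path
y = model.avg_pool(x)
y = y.squeeze(-1)
y = y.transpose(-1, -2)
y = model.conv(y)
y = y.transpose(-1, -2)
y = y.unsqueeze(-1)
y = model.sigmoid(y)
step_by_step = x * y.expand_as(x)

print(torch.allclose(step_by_step, model(x)))  # True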

After the convolution and reshaping back, the tensor has shape torch.Size([2, 3, 1, 1]).
At this point, the data is:

tensor([[[[-0.0002]],
         [[ 0.0001]],
         [[ 0.0010]]],
        [[[ 0.0009]],
         [[ 0.0028]],
         [[-0.0019]]]])

Then sigmoid squashes each value into (0, 1); because these inputs are all close to zero, the outputs all land near sigmoid(0) = 0.5:

tensor([[[[0.5000]],
         [[0.5000]],
         [[0.5002]]],
        [[[0.5002]],
         [[0.5007]],
         [[0.4995]]]])

Finally, y.expand_as(x) broadcasts the attention weights back to the input's shape, and the element-wise product x * y rescales each channel, returning torch.Size([2, 3, 224, 224]).
Note that the channel argument is never actually used in this implementation, since k_size is hard-coded to 3.
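
For reference, the ECA paper (Wang et al., CVPR 2020) derives the kernel size adaptively from the channel count as the nearest odd number to (log2(C) + b) / γ, with γ = 2 and b = 1. A minimal sketch of that rule (the helper name eca_kernel_size is just illustrative):

import math

def eca_kernel_size(channel, gamma=2, b=1):
    # nearest odd number to (log2(C) + b) / gamma
    t = int(abs((math.log(channel, 2) + b) / gamma))
    return t if t % 2 else t + 1

print(eca_kernel_size(256))  # 5
print(eca_kernel_size(64))   # 3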

Source: blog.csdn.net/pengxiang1998/article/details/131001453