Inception as a Backbone: Crossing Paths (PyTorch Implementation and Code Walkthrough)

  • Inception has a relatively small parameter count, making it well suited to large-scale data, especially on platforms with limited compute resources

To further reduce the parameter count, Inception adds a number of 1x1 convolution blocks for dimensionality reduction, yielding the Inception v1 version. Inception v1 stacks 9 of the modules described above, for 22 layers in total, and applies global average pooling after the last Inception module. To mitigate vanishing gradients during training, two auxiliary classifiers are introduced: after the outputs of the third and sixth Inception modules, a Softmax is applied and a loss is computed, which is backpropagated together with the final loss during training.
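The auxiliary-classifier idea can be sketched as follows. This is a minimal sketch following the structure described in the GoogLeNet paper (average pooling, a 1x1 reduction conv, and two fully connected layers); the class name and the example channel/class sizes here are assumptions for illustration, not code from this post:

```python
import torch
from torch import nn

class AuxClassifier(nn.Module):
    """Sketch of a GoogLeNet-style auxiliary classifier head."""
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.avgpool = nn.AvgPool2d(5, stride=3)    # 14x14 feature map -> 4x4
        self.conv = nn.Conv2d(in_channels, 128, 1)  # 1x1 conv for dimensionality reduction
        self.fc1 = nn.Linear(128 * 4 * 4, 1024)
        self.dropout = nn.Dropout(0.7)
        self.fc2 = nn.Linear(1024, num_classes)

    def forward(self, x):
        x = self.avgpool(x)
        x = torch.relu(self.conv(x))
        x = torch.flatten(x, 1)
        x = torch.relu(self.fc1(x))
        x = self.dropout(x)
        return self.fc2(x)

# Example: an intermediate 512-channel, 14x14 feature map, 1000 classes
aux = AuxClassifier(512, 1000)
out = aux(torch.randn(2, 512, 14, 14))
print(out.shape)  # torch.Size([2, 1000])
```

During training, the auxiliary losses are typically added to the main loss with a small weight (0.3 in the paper) and the heads are discarded at inference time.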


Inception v1 basic structure diagram:

[image: Inception v1 module structure]

Inception v1 code:

```python
import torch
from torch import nn
import torch.nn.functional as F

# Basic conv block: a convolution followed by ReLU
class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, padding=0):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding)

    def forward(self, x):
        x = self.conv(x)
        return F.relu(x, inplace=True)

# Inceptionv1 module; the channel sizes of each sub-branch are given at construction
class Inceptionv1(nn.Module):
    def __init__(self, in_dim, hid_1_1, hid_2_1, hid_2_3, hid_3_1, out_3_5, out_4_1):
        super(Inceptionv1, self).__init__()
        # Network definitions for the four parallel branches
        self.branch1x1 = BasicConv2d(in_dim, hid_1_1, 1)
        self.branch3x3 = nn.Sequential(
            BasicConv2d(in_dim, hid_2_1, 1),
            BasicConv2d(hid_2_1, hid_2_3, 3, padding=1)
        )
        self.branch5x5 = nn.Sequential(
            BasicConv2d(in_dim, hid_3_1, 1),
            BasicConv2d(hid_3_1, out_3_5, 5, padding=2)
        )
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            BasicConv2d(in_dim, out_4_1, 1)
        )

    # Forward pass
    def forward(self, x):
        b1 = self.branch1x1(x)
        b2 = self.branch3x3(x)
        b3 = self.branch5x5(x)
        b4 = self.branch_pool(x)
        # Concatenate the four branches along the channel dimension
        output = torch.cat((b1, b2, b3, b4), dim=1)
        return output
```
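A quick sanity check of the block above. The classes are repeated here so the sketch runs standalone; the channel sizes are those of GoogLeNet's inception-3a stage, used as example values. Every branch preserves the spatial size, so only the channel dimension grows:

```python
import torch
from torch import nn
import torch.nn.functional as F

# Same classes as the listing above, condensed so this sketch is self-contained
class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, padding=0):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding)
    def forward(self, x):
        return F.relu(self.conv(x), inplace=True)

class Inceptionv1(nn.Module):
    def __init__(self, in_dim, hid_1_1, hid_2_1, hid_2_3, hid_3_1, out_3_5, out_4_1):
        super().__init__()
        self.branch1x1 = BasicConv2d(in_dim, hid_1_1, 1)
        self.branch3x3 = nn.Sequential(BasicConv2d(in_dim, hid_2_1, 1),
                                       BasicConv2d(hid_2_1, hid_2_3, 3, padding=1))
        self.branch5x5 = nn.Sequential(BasicConv2d(in_dim, hid_3_1, 1),
                                       BasicConv2d(hid_3_1, out_3_5, 5, padding=2))
        self.branch_pool = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                         BasicConv2d(in_dim, out_4_1, 1))
    def forward(self, x):
        return torch.cat([self.branch1x1(x), self.branch3x3(x),
                          self.branch5x5(x), self.branch_pool(x)], dim=1)

# Channel sizes of GoogLeNet's inception-3a stage
block = Inceptionv1(192, 64, 96, 128, 16, 32, 32)
out = block(torch.randn(2, 192, 28, 28))
print(out.shape)  # torch.Size([2, 256, 28, 28]); 256 = 64 + 128 + 32 + 32
```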


Inception v2 highlights:

  • Adds BN (batch normalization) layers

  • Replaces each 5x5 convolution with two stacked 3x3 convolutions, which reduces the parameter count while also increasing the network's non-linearity
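The parameter saving from the 3x3 factorization is easy to verify numerically. A minimal sketch, assuming an example 64-channel feature map (the channel count is arbitrary); the stacked 3x3 convs cover the same 5x5 receptive field with roughly 28% fewer weights:

```python
import torch
from torch import nn

c = 64  # example channel count (hypothetical)
conv5 = nn.Conv2d(c, c, 5, padding=2, bias=False)
conv3_stack = nn.Sequential(
    nn.Conv2d(c, c, 3, padding=1, bias=False),
    nn.Conv2d(c, c, 3, padding=1, bias=False),
)

p5 = sum(p.numel() for p in conv5.parameters())        # 64*64*25 = 102400
p3 = sum(p.numel() for p in conv3_stack.parameters())  # 2*64*64*9 = 73728
print(p5, p3)

# Both paths preserve the spatial size and see a 5x5 receptive field
x = torch.randn(1, c, 28, 28)
assert conv5(x).shape == conv3_stack(x).shape
```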

Inception v2 structure diagram:

[image: Inception v2 module structure]

The code is as follows (the listing is cut off in the source at the class header):

```python
import torch
from torch import nn
import torch.nn.functional as F

class BasicConv2d(nn.Module):
```
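Since the original listing breaks off here, the following is a minimal sketch of what a v2-style `BasicConv2d` might look like, an assumption on my part rather than the original author's code: the key change versus v1 is inserting a BN layer between the convolution and the ReLU:

```python
import torch
from torch import nn
import torch.nn.functional as F

class BasicConv2d(nn.Module):
    # v2-style basic block: Conv -> BN -> ReLU (BN is the key v2 addition)
    def __init__(self, in_channels, out_channels, kernel_size, padding=0):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              padding=padding, bias=False)  # BN makes the conv bias redundant
        self.bn = nn.BatchNorm2d(out_channels)

    def forward(self, x):
        return F.relu(self.bn(self.conv(x)), inplace=True)

block = BasicConv2d(3, 16, 3, padding=1)
y = block(torch.randn(2, 3, 32, 32))
print(y.shape)  # torch.Size([2, 16, 32, 32])
```

With this block in place, the v2 Inception module itself mirrors the v1 version above, except that the 5x5 branch becomes two stacked 3x3 `BasicConv2d` blocks.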


Reprinted from blog.csdn.net/m0_69523172/article/details/124567090