PyTorch: Neural Networks (3)

This serves as a warm-up for the projects to come (≧▽≦)/. We are now actually writing network layers, and the loss and optimizer are all here too. It is not hard to write; the main difficulty is just being unfamiliar with the API, and the code itself is easy to follow.

The official site also lays out the typical steps for training a network:

  1. Define the network structure
  2. Iterate over your dataset
  3. Feed the input through the network
  4. Compute the loss
  5. Backpropagate to compute the gradients
  6. Update the parameters according to the chosen optimization algorithm (the example uses the simple gradient-descent update rule)
# Import the required packages
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 3x3 square convolution
        # in_channels, out_channels, kernel_size
        self.conv1 = nn.Conv2d(1, 6, 3)
        self.conv2 = nn.Conv2d(6, 16, 3)
        # Fully connected layers: define the input and output dimensions
        self.fc1 = nn.Linear(16*6*6, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)
        
    # Forward pass, with ReLU activations and max pooling added
    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        x = F.max_pool2d(F.relu(self.conv2(x)), (2))
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
    # Helper that counts the features per sample, so the feature maps can be flattened into a vector for the fully connected layers
    def num_flat_features(self, x):
        size = x.size()[1:]
        num_features = 1
        for s in size:
            num_features *= s
        return num_features
    
net = Net()
print(net)
Net(
  (conv1): Conv2d(1, 6, kernel_size=(3, 3), stride=(1, 1))
  (conv2): Conv2d(6, 16, kernel_size=(3, 3), stride=(1, 1))
  (fc1): Linear(in_features=576, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)
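As a small aside, num_flat_features simply multiplies out all dimensions except the batch dimension. In recent PyTorch versions the same flattening can be done directly with torch.flatten; a minimal sketch (not part of the original tutorial code):

x = torch.randn(1, 16, 6, 6)   # e.g. the feature map that enters fc1
flat = torch.flatten(x, 1)     # keep dim 0 (batch), flatten the rest
print(flat.shape)              # torch.Size([1, 576])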
# Parameter list; I also added a small loop to print the total number of parameters in the network
params = list(net.parameters())
print(len(params))
print(params[0].size())
num_total = 0
for s in range(0, len(params)):
    num_par = 1
    for num in params[s].size():
        num_par *= num
    num_total += num_par
print(num_total)
10
torch.Size([6, 1, 3, 3])
81194
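The same total can be computed more compactly with Tensor.numel(), which returns the number of elements in a tensor; a quick check:

print(sum(p.numel() for p in net.parameters()))  # 81194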
# Randomly initialize an input of the right shape
# Note the input layout: nSamples x nChannels x Height x Width
input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)
tensor([[-0.0035,  0.0065, -0.0847, -0.0913, -0.0093,  0.0638, -0.1184,  0.0701,
          0.1883,  0.0905]], grad_fn=<AddmmBackward>)
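Note that torch.nn only supports mini-batches, so the input always needs a batch dimension. For a single sample, a fake batch dimension can be added with unsqueeze(0); a short sketch:

single = torch.randn(1, 32, 32)   # one 1-channel 32x32 image, no batch dim
batched = single.unsqueeze(0)     # shape becomes (1, 1, 32, 32)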
# Gradients have to be zeroed manually, otherwise they accumulate
net.zero_grad()
# out is not a scalar, so backward() needs a gradient tensor of the same shape as out
out.backward(torch.randn(1, 10))
output = net(input)
target = torch.randn(10)
target = target.view(1, -1)
# Specify the loss; here it is mean squared error
criterion = nn.MSELoss()
# Compute the loss
loss = criterion(output, target)
print(loss)
tensor(0.6834, grad_fn=<MseLossBackward>)
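Since nn.MSELoss uses the default reduction='mean', the same value can be reproduced by hand, which is a useful sanity check:

manual = ((output - target) ** 2).mean()
print(manual)  # matches the value reported by criterion(output, target)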
print(loss.grad_fn)
print(loss.grad_fn.next_functions[0][0])
print(loss.grad_fn.next_functions[0][0].next_functions[0][0])
<MseLossBackward object at 0x7fe3ec0bf4a8>
<AddmmBackward object at 0x7fe3ec0bf9e8>
<AccumulateGrad object at 0x7fe3ec0bf4a8>
net.zero_grad()
print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)

# Run backpropagation
loss.backward()

# The corresponding parameters now hold their gradients
print("conv1.bias.grad after backward")
print(net.conv1.bias.grad)

conv1.bias.grad before backward
tensor([0., 0., 0., 0., 0., 0.])
conv1.bias.grad after backward
tensor([-0.0009,  0.0055,  0.0078, -0.0062,  0.0008,  0.0101])
# Update the parameters with the simple rule: weight = weight - learning_rate * gradient
learning_rate = 0.01
for f in net.parameters():
    #print(f.data)
    f.data.sub_(f.grad.data*learning_rate)
    #print(f.data)
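The in-place .data manipulation above follows the tutorial, but a more current idiom is to wrap the update in torch.no_grad() and operate on the parameters directly; a minimal sketch of the same update:

with torch.no_grad():
    for f in net.parameters():
        f -= learning_rate * f.grad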
# Import the optimizer package and use stochastic gradient descent
import torch.optim as optim
optimizer = optim.SGD(net.parameters(), lr = 0.01)

optimizer.zero_grad()
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()
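Putting the six steps from the beginning together, a training loop looks roughly like the sketch below; data_loader is a hypothetical iterable yielding (inputs, targets) batches and num_epochs is assumed, neither is defined in this post:

for epoch in range(num_epochs):
    for inputs, targets in data_loader:
        optimizer.zero_grad()              # clear accumulated gradients
        outputs = net(inputs)              # forward pass
        loss = criterion(outputs, targets) # compute the loss
        loss.backward()                    # backpropagate
        optimizer.step()                   # update the parameters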

Seems fine and is understandable, so on to the next part of the tutorial...

Reference:
https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py
