Deep Learning Parameter Updates: Adaptive Learning Rates (Adaptive Learning Rate)

This post walks through the main optimization methods used to update parameters in deep learning.
1.Adagrad
Adagrad makes the learning rate adaptive by dividing $\eta$ by $\sqrt{\sum_{i=0}^{t} g_i^2}$: parameters that receive gradients infrequently get a larger effective learning rate, while frequently updated parameters get a smaller one. This makes Adagrad well suited to sparse data.

$$w_{t+1} \leftarrow w_t - \frac{\eta}{\sqrt{\sum_{i=0}^{t} g_i^2} + \varepsilon}\, g_t$$

Here $\varepsilon$ is added for numerical stability: the accumulated sum can be 0, and a 0 in the denominator would blow up, so typically $\varepsilon \approx 10^{-10}$. Since different parameters have different gradients, each parameter ends up with its own learning rate, which is what makes the method adaptive.
Core code:

def sgd_adagrad(parameters, sqrs, lr):
    eps = 1e-10
    for param, sqr in zip(parameters, sqrs):
        sqr[:] = sqr + param.grad.data ** 2                # accumulate squared gradients in place
        div = lr / torch.sqrt(sqr + eps) * param.grad.data # per-parameter scaled step
        param.data = param.data - div
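To make the shrinking step size concrete, here is a tiny toy run of sgd_adagrad, a sketch added purely for illustration (it assumes PyTorch >= 0.4 for the requires_grad tensor API): a single scalar parameter receives a constant gradient, so the printed update sizes decay like $\eta/\sqrt{t}$.

import torch

w = torch.ones(1, requires_grad=True)      # one toy parameter
sqr_state = [torch.zeros_like(w.data)]     # its accumulated squared gradient
for step in range(1, 6):
    loss = (2 * w).sum()                   # gradient w.r.t. w is always 2
    if w.grad is not None:
        w.grad.data.zero_()
    loss.backward()
    before = w.data.clone()
    sgd_adagrad([w], sqr_state, lr=0.1)
    print('step {}: update size {:.4f}'.format(step, float((before - w.data).abs())))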

The following example uses the Adagrad update rule and PyTorch to train a simple three-layer network to recognize the MNIST handwritten digits.

import numpy as np
import torch
from torchvision.datasets import MNIST # import the MNIST dataset shipped with torchvision
from torch.utils.data import DataLoader
from torch import nn
from torch.autograd import Variable
import time
import matplotlib.pyplot as plt
%matplotlib inline

def data_tf(x):
    x = np.array(x, dtype='float32') / 255
    x = (x - 0.5) / 0.5 # normalize to [-1, 1] (this trick is discussed later)
    x = x.reshape((-1,)) # flatten into a vector
    x = torch.from_numpy(x)
    return x

train_set = MNIST('./data', train=True, transform=data_tf, download=True) # load the dataset with the transform defined above
test_set = MNIST('./data', train=False, transform=data_tf, download=True)

# define the loss function
criterion = nn.CrossEntropyLoss()
train_data = DataLoader(train_set, batch_size=64, shuffle=True)
# define a 3-layer network with nn.Sequential
net = nn.Sequential(
    nn.Linear(784, 200),
    nn.ReLU(),
    nn.Linear(200, 10),
)

# initialize the accumulated squared-gradient terms
sqrs = []
for param in net.parameters():
    sqrs.append(torch.zeros_like(param.data))

# start training
losses = []
idx = 0
start = time.time() # start the timer
for e in range(5):
    train_loss = 0
    for im, label in train_data:
        im = Variable(im)
        label = Variable(label)
        # forward pass
        out = net(im)
        loss = criterion(out, label)
        # backward pass
        net.zero_grad()
        loss.backward()
        sgd_adagrad(net.parameters(), sqrs, 1e-2) # learning rate 0.01
        # record the loss
        train_loss += loss.item()
        if idx % 30 == 0:
            losses.append(loss.item())
        idx += 1
    print('epoch: {}, Train Loss: {:.6f}'
          .format(e, train_loss / len(train_data)))
end = time.time() # stop the timer
print('Time used: {:.5f} s'.format(end - start))

The results of the run are shown below:

epoch: 0, Train Loss: 0.406752
epoch: 1, Train Loss: 0.248588
epoch: 2, Train Loss: 0.211789
epoch: 3, Train Loss: 0.188928
epoch: 4, Train Loss: 0.172839
Time used: 54.70610 s

The loss recorded during training is plotted below:

x_axis = np.linspace(0, 5, len(losses), endpoint=True)
plt.semilogy(x_axis, losses, label='adagrad')
plt.legend(loc='best')

[Figure: training loss curve for Adagrad]
PyTorch also ships a built-in Adagrad optimizer; simply call torch.optim.Adagrad(). Example below.

train_data = DataLoader(train_set, batch_size=64, shuffle=True)
# define a 3-layer network with nn.Sequential
net = nn.Sequential(
    nn.Linear(784, 200),
    nn.ReLU(),
    nn.Linear(200, 10),
)

optimizer = torch.optim.Adagrad(net.parameters(), lr=1e-2)
# start training

start = time.time() # start the timer
for e in range(5):
    train_loss = 0
    for im, label in train_data:
        im = Variable(im)
        label = Variable(label)
        # forward pass
        out = net(im)
        loss = criterion(out, label)
        # backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # record the loss
        train_loss += loss.item()
    print('epoch: {}, Train Loss: {:.6f}'
          .format(e, train_loss / len(train_data)))
end = time.time() # stop the timer
print('Time used: {:.5f} s'.format(end - start))
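Note that the test_set loaded above is never actually used in these training loops. As a side note, here is a minimal sketch of how one could measure test accuracy after training (the name test_data is introduced here for illustration; it assumes PyTorch >= 0.4 for torch.no_grad and argmax, and reuses test_set and net from above):

test_data = DataLoader(test_set, batch_size=128, shuffle=False)
net.eval()                     # evaluation mode (good practice, though this net has no dropout/batchnorm)
correct = 0
total = 0
with torch.no_grad():          # no gradients needed for evaluation
    for im, label in test_data:
        out = net(im)
        pred = out.argmax(dim=1)
        correct += (pred == label).sum().item()
        total += label.size(0)
print('test accuracy: {:.4f}'.format(correct / total))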

2.RMSProp
Adagrad accumulates all past squared gradients, while RMSProp replaces the sum with a moving average controlled by a coefficient $\alpha$. This coefficient is exactly what separates RMSProp from Adagrad: thanks to the moving average, the accumulated squared-gradient term stays bounded late in training, so $\sigma$ never grows too large and the model can still reach a reasonably good solution in the later stages.

$$w_1 \leftarrow w_0 - \frac{\eta}{\sigma_0 + \varepsilon}\, g_0, \qquad \sigma_0 = g_0$$
$$w_2 \leftarrow w_1 - \frac{\eta}{\sigma_1 + \varepsilon}\, g_1, \qquad \sigma_1 = \sqrt{\alpha (\sigma_0)^2 + (1-\alpha)(g_1)^2}$$
$$w_3 \leftarrow w_2 - \frac{\eta}{\sigma_2 + \varepsilon}\, g_2, \qquad \sigma_2 = \sqrt{\alpha (\sigma_1)^2 + (1-\alpha)(g_2)^2}$$
$$\ldots$$
$$w_t \leftarrow w_{t-1} - \frac{\eta}{\sigma_{t-1} + \varepsilon}\, g_{t-1}, \qquad \sigma_{t-1} = \sqrt{\alpha (\sigma_{t-2})^2 + (1-\alpha)(g_{t-1})^2}$$
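To see why the moving average matters, the short numerical sketch below (illustrative only, not from the original post) feeds a constant gradient of 1 into both accumulators: Adagrad's denominator grows without bound, while RMSProp's stays close to 1, so RMSProp's effective step size does not shrink toward zero.

import math

alpha, g = 0.9, 1.0
adagrad_sum = 0.0   # Adagrad: running sum of squared gradients
rms_avg = 0.0       # RMSProp: exponential moving average of squared gradients
for t in range(1, 101):
    adagrad_sum += g ** 2
    rms_avg = alpha * rms_avg + (1 - alpha) * g ** 2
    if t in (1, 10, 100):
        print('t={:3d}  adagrad denom={:7.3f}  rmsprop denom={:5.3f}'.format(
            t, math.sqrt(adagrad_sum), math.sqrt(rms_avg)))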

Core code:

def rmsprop(parameters, sqrs, lr, alpha):
    eps = 1e-10
    for param, sqr in zip(parameters, sqrs):
        sqr[:] = alpha * sqr + (1 - alpha) * param.grad.data ** 2 # exponential moving average of squared gradients
        div = lr / torch.sqrt(sqr + eps) * param.grad.data
        param.data = param.data - div

Below, MNIST digit recognition is implemented with the RMSProp update rule.

import numpy as np
import torch
from torchvision.datasets import MNIST # import the MNIST dataset shipped with torchvision
from torch.utils.data import DataLoader
from torch import nn
from torch.autograd import Variable
import time
import matplotlib.pyplot as plt
%matplotlib inline

def data_tf(x):
    x = np.array(x, dtype='float32') / 255
    x = (x - 0.5) / 0.5 # normalize to [-1, 1] (this trick is discussed later)
    x = x.reshape((-1,)) # flatten into a vector
    x = torch.from_numpy(x)
    return x

train_set = MNIST('./data', train=True, transform=data_tf, download=True) # load the dataset with the transform defined above
test_set = MNIST('./data', train=False, transform=data_tf, download=True)

# define the loss function
criterion = nn.CrossEntropyLoss()
train_data = DataLoader(train_set, batch_size=64, shuffle=True)
# define a 3-layer network with nn.Sequential
net = nn.Sequential(
    nn.Linear(784, 200),
    nn.ReLU(),
    nn.Linear(200, 10),
)

# initialize the accumulated squared-gradient terms
sqrs = []
for param in net.parameters():
    sqrs.append(torch.zeros_like(param.data))

# start training
losses = []
idx = 0
start = time.time() # start the timer
for e in range(5):
    train_loss = 0
    for im, label in train_data:
        im = Variable(im)
        label = Variable(label)
        # forward pass
        out = net(im)
        loss = criterion(out, label)
        # backward pass
        net.zero_grad()
        loss.backward()
        rmsprop(net.parameters(), sqrs, 1e-3, 0.9) # learning rate 0.001, alpha 0.9
        # record the loss
        train_loss += loss.item()
        if idx % 30 == 0:
            losses.append(loss.item())
        idx += 1
    print('epoch: {}, Train Loss: {:.6f}'
          .format(e, train_loss / len(train_data)))
end = time.time() # stop the timer
print('Time used: {:.5f} s'.format(end - start))

The training results are as follows:

epoch: 0, Train Loss: 0.363507
epoch: 1, Train Loss: 0.161640
epoch: 2, Train Loss: 0.120954
epoch: 3, Train Loss: 0.101136
epoch: 4, Train Loss: 0.085934
Time used: 58.86966 s

Visualizing the loss:

x_axis = np.linspace(0, 5, len(losses), endpoint=True)
plt.semilogy(x_axis, losses, label='alpha=0.9')
plt.legend(loc='best')

[Figure: training loss curve for RMSProp (alpha=0.9)]
PyTorch also has RMSProp built in; simply call torch.optim.RMSprop(). Example below.

train_data = DataLoader(train_set, batch_size=64, shuffle=True)
# define a 3-layer network with nn.Sequential
net = nn.Sequential(
    nn.Linear(784, 200),
    nn.ReLU(),
    nn.Linear(200, 10),
)

optimizer = torch.optim.RMSprop(net.parameters(), lr=1e-3, alpha=0.9)

# start training

start = time.time() # start the timer
for e in range(5):
    train_loss = 0
    for im, label in train_data:
        im = Variable(im)
        label = Variable(label)
        # forward pass
        out = net(im)
        loss = criterion(out, label)
        # backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # record the loss
        train_loss += loss.item()
    print('epoch: {}, Train Loss: {:.6f}'
          .format(e, train_loss / len(train_data)))
end = time.time() # stop the timer
print('Time used: {:.5f} s'.format(end - start))

3.Momentum
This method borrows the idea of momentum from physics. Picture a ball rolling down a hill: how far it moves at any moment is determined not only by the force acting on it right now (the current parameter update, i.e. the current gradient $\nabla L(\theta^i)$) but also by its inertia (the accumulated past updates). The method therefore introduces a velocity $v$: $v^{t-1}$ is the result of the previous updates, and $v^t$ is the current one.
Let $\theta$ be the parameters to update and $v$ the accumulated past updates (the inertia built up by earlier steps).
Starting from $\theta^0$ and $v^0 = 0$, the updates are:

$$v^1 = \lambda v^0 - \eta \nabla L(\theta^0)$$
$$v^2 = \lambda v^1 - \eta \nabla L(\theta^1)$$
$$\ldots$$
$$v^t = \lambda v^{t-1} - \eta \nabla L(\theta^{t-1})$$


Core code:

def sgd_momentum(parameters, vs, lr, gamma):
    for param, v in zip(parameters, vs):
        v[:] = gamma * v + lr * param.grad.data # accumulate velocity (note the sign convention: v is subtracted below)
        param.data = param.data - v
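A small remark on signs: the formula above accumulates $v = \lambda v - \eta \nabla L$ and the parameters then move by $\theta \leftarrow \theta + v$, while sgd_momentum stores v with the opposite sign ($v \leftarrow \gamma v + \eta \nabla L$) and subtracts it. The two conventions produce identical trajectories; the short sketch below (plain Python, illustrative only) checks this on the toy objective f(theta) = theta**2.

def step_formula(theta, v, grad, lr, lam):
    # v_t = lam * v_{t-1} - lr * grad,  theta_t = theta_{t-1} + v_t
    v = lam * v - lr * grad
    return theta + v, v

def step_code(theta, v, grad, lr, gamma):
    # sgd_momentum convention: v_t = gamma * v_{t-1} + lr * grad,  theta_t = theta_{t-1} - v_t
    v = gamma * v + lr * grad
    return theta - v, v

theta_a = theta_b = 5.0
v_a = v_b = 0.0
for _ in range(10):
    grad_a, grad_b = 2 * theta_a, 2 * theta_b   # gradient of f(theta) = theta ** 2
    theta_a, v_a = step_formula(theta_a, v_a, grad_a, 0.1, 0.9)
    theta_b, v_b = step_code(theta_b, v_b, grad_b, 0.1, 0.9)
print(theta_a, theta_b)   # the two values coincide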

Below, a three-layer network is trained on MNIST with the momentum update rule.

import numpy as np
import torch
from torchvision.datasets import MNIST # import the MNIST dataset shipped with torchvision
from torch.utils.data import DataLoader
from torch import nn
from torch.autograd import Variable
import time
import matplotlib.pyplot as plt
# %matplotlib inline

def data_tf(x):
    x = np.array(x, dtype='float32') / 255
    x = (x - 0.5) / 0.5 # normalize to [-1, 1] (this trick is discussed later)
    x = x.reshape((-1,)) # flatten into a vector
    x = torch.from_numpy(x)
    return x

# train_set = MNIST('./data', train=True, transform=data_tf, download=True) # load the dataset with the transform defined above
# test_set = MNIST('./data', train=False, transform=data_tf, download=True)

# download the MNIST handwritten-digit training set
train_set = MNIST(root='/home/hk/Desktop/learn_pytorch/data', train=True, transform=data_tf, download=False) # data_tf normalizes inside the transform

test_set = MNIST(root='/home/hk/Desktop/learn_pytorch/data', train=False, transform=data_tf, download=True)
# define the loss function
criterion = nn.CrossEntropyLoss()

def sgd_momentum(parameters, vs, lr, gamma):
    for param, v in zip(parameters, vs):
        v[:] = gamma * v + lr * param.grad.data
        param.data = param.data - v

train_data = DataLoader(train_set, batch_size=64, shuffle=True)
# define a 3-layer network with nn.Sequential
net = nn.Sequential(
    nn.Linear(784, 200),
    nn.ReLU(),
    nn.Linear(200, 10),
)

# initialize the velocities as zero tensors with the same shapes as the parameters
vs = []
for param in net.parameters():
    vs.append(torch.zeros_like(param.data))

# start training
losses = []
idx = 0
start = time.time() # start the timer
for e in range(5):
    train_loss = 0
    for im, label in train_data:
        im = Variable(im)
        label = Variable(label)
        # forward pass
        out = net(im)
        loss = criterion(out, label)
        # backward pass
        net.zero_grad()
        loss.backward()
        sgd_momentum(net.parameters(), vs, 1e-2, 0.9) # momentum 0.9, learning rate 0.01
        # record the loss
        train_loss += loss.item()
        if idx % 20 == 0:
            losses.append(loss.item())
        idx+=1
    print('epoch: {}, Train Loss: {:.6f}'
          .format(e, train_loss / len(train_data)))
end = time.time() # stop the timer
print('Time used: {:.5f} s'.format(end - start))

x_axis = np.linspace(0, 5, len(losses), endpoint=True)
plt.semilogy(x_axis, losses, label='momentum')
plt.legend(loc='best')
plt.show()

The training results:

epoch: 0, Train Loss: 0.367609
epoch: 1, Train Loss: 0.168976
epoch: 2, Train Loss: 0.123189
epoch: 3, Train Loss: 0.100595
epoch: 4, Train Loss: 0.083965
Time used: 69.73666 s

[Figure: training loss curve for momentum]
With momentum the loss drops very quickly, but be careful with the learning rate and the momentum coefficient: together they directly determine the size of every parameter update, so it is worth trying several values (a small sweep sketch is given after the next two examples).
PyTorch implements momentum natively; simply pass momentum=0.9 to torch.optim.SGD(). Example below.

train_data = DataLoader(train_set, batch_size=64, shuffle=True)
# define a 3-layer network with nn.Sequential
net = nn.Sequential(
    nn.Linear(784, 200),
    nn.ReLU(),
    nn.Linear(200, 10),
)

optimizer = torch.optim.SGD(net.parameters(), lr=1e-2, momentum=0.9) # with momentum
# start training
losses = []
idx = 0
start = time.time() # start the timer
for e in range(5):
    train_loss = 0
    for im, label in train_data:
        im = Variable(im)
        label = Variable(label)
        # forward pass
        out = net(im)
        loss = criterion(out, label)
        # backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # record the loss
        train_loss += loss.item()
        if idx % 30 == 0: # record every 30 steps
            losses.append(loss.item())
        idx += 1
    print('epoch: {}, Train Loss: {:.6f}'
          .format(e, train_loss / len(train_data)))
end = time.time() # stop the timer
print('Time used: {:.5f} s'.format(end - start))

For comparison, here is stochastic gradient descent without momentum.

# define a 3-layer network with nn.Sequential
net = nn.Sequential(
    nn.Linear(784, 200),
    nn.ReLU(),
    nn.Linear(200, 10),
)

optimizer = torch.optim.SGD(net.parameters(), lr=1e-2) # without momentum
# start training
losses1 = []
idx = 0
start = time.time() # start the timer
for e in range(5):
    train_loss = 0
    for im, label in train_data:
        im = Variable(im)
        label = Variable(label)
        # forward pass
        out = net(im)
        loss = criterion(out, label)
        # backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # record the loss
        train_loss += loss.item()
        if idx % 30 == 0: # record every 30 steps
            losses1.append(loss.item())
        idx += 1
    print('epoch: {}, Train Loss: {:.6f}'
          .format(e, train_loss / len(train_data)))
end = time.time() # stop the timer
print('Time used: {:.5f} s'.format(end - start))
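As suggested earlier, the learning rate and momentum coefficient are worth tuning together. A minimal sweep sketch follows (illustrative only; it reuses train_data and criterion defined above and runs a single epoch per setting, which is rough but enough for a first comparison):

for lr in (1e-2, 1e-3, 1e-4):
    for momentum in (0.0, 0.5, 0.9):
        net = nn.Sequential(nn.Linear(784, 200), nn.ReLU(), nn.Linear(200, 10))  # fresh network per setting
        optimizer = torch.optim.SGD(net.parameters(), lr=lr, momentum=momentum)
        total = 0.0
        for im, label in train_data:           # one epoch
            out = net(Variable(im))
            loss = criterion(out, Variable(label))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            total += loss.item()
        print('lr={:g} momentum={:.1f} mean train loss={:.4f}'.format(
            lr, momentum, total / len(train_data)))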

4.Adam
Adam combines RMSProp and Momentum and is another adaptive-learning-rate method. It uses the past gradients ($m_t$) and the past squared gradients ($v_t$) to dynamically adjust the learning rate of each parameter. Its main advantage is that, after bias correction, the step size at every iteration stays within a known range, which keeps the parameter updates stable.
Notation:
$\alpha$: the step size of the final update of the parameters $\theta$
$\beta_1, \beta_2 \in [0, 1)$: exponential decay rates for the moment estimates
$\theta_0$: the initial parameter vector
The pseudocode for Adam is as follows:

$$m_0 \leftarrow 0, \quad v_0 \leftarrow 0, \quad t \leftarrow 0$$

while $\theta_t$ is not converged do
$$t \leftarrow t + 1$$
$$g_t \leftarrow \nabla_\theta f(\theta_{t-1})$$
$$m_t \leftarrow \beta_1 m_{t-1} + (1 - \beta_1)\, g_t$$
$$v_t \leftarrow \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2$$
$$\hat{m}_t \leftarrow m_t / (1 - \beta_1^t)$$
$$\hat{v}_t \leftarrow v_t / (1 - \beta_2^t)$$
$$\theta_t \leftarrow \theta_{t-1} - \alpha\, \hat{m}_t / (\sqrt{\hat{v}_t} + \varepsilon)$$
end while
return $\theta_t$
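The bias-correction step ($\hat{m}_t$, $\hat{v}_t$) is what keeps the early iterations well scaled: since $m_0 = v_0 = 0$, the raw moving averages are biased toward zero for small $t$. The sketch below (illustrative only, not from the original post) feeds a constant gradient of 1 and shows that the uncorrected first moment starts far below the true mean, while the corrected one is exactly 1.

beta1 = 0.9
m = 0.0          # first moment estimate, initialized to zero
g = 1.0          # constant gradient
for t in range(1, 6):
    m = beta1 * m + (1 - beta1) * g
    m_hat = m / (1 - beta1 ** t)          # bias-corrected estimate
    print('t={}  m={:.3f}  m_hat={:.3f}'.format(t, m, m_hat))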

Core code:

def adam(parameters, vs, sqrs, lr, t, beta1=0.9, beta2=0.999):
    eps = 1e-8
    for param, v, sqr in zip(parameters, vs, sqrs):
        v[:] = beta1 * v + (1 - beta1) * param.grad.data          # first moment estimate
        sqr[:] = beta2 * sqr + (1 - beta2) * param.grad.data ** 2 # second moment estimate
        v_hat = v / (1 - beta1 ** t)                               # bias correction
        s_hat = sqr / (1 - beta2 ** t)
        param.data = param.data - lr * v_hat / torch.sqrt(s_hat + eps)

Below, the three-layer MNIST network is trained with the Adam update rule.

import numpy as np
import torch
from torchvision.datasets import MNIST # import the MNIST dataset shipped with torchvision
from torch.utils.data import DataLoader
from torch import nn
from torch.autograd import Variable
import time
import matplotlib.pyplot as plt
%matplotlib inline

def data_tf(x):
    x = np.array(x, dtype='float32') / 255
    x = (x - 0.5) / 0.5 # normalize to [-1, 1] (this trick is discussed later)
    x = x.reshape((-1,)) # flatten into a vector
    x = torch.from_numpy(x)
    return x

train_set = MNIST('./data', train=True, transform=data_tf, download=True) # load the dataset with the transform defined above
test_set = MNIST('./data', train=False, transform=data_tf, download=True)

# define the loss function
criterion = nn.CrossEntropyLoss()
train_data = DataLoader(train_set, batch_size=64, shuffle=True)
# define a 3-layer network with nn.Sequential
net = nn.Sequential(
    nn.Linear(784, 200),
    nn.ReLU(),
    nn.Linear(200, 10),
)

# initialize the squared-gradient terms and the momentum terms
sqrs = []
vs = []
for param in net.parameters():
    sqrs.append(torch.zeros_like(param.data))
    vs.append(torch.zeros_like(param.data))
t = 1
# start training
losses = []
idx = 0

start = time.time() # start the timer
for e in range(5):
    train_loss = 0
    for im, label in train_data:
        im = Variable(im)
        label = Variable(label)
        # forward pass
        out = net(im)
        loss = criterion(out, label)
        # backward pass
        net.zero_grad()
        loss.backward()
        adam(net.parameters(), vs, sqrs, 1e-3, t) # learning rate 0.001
        t += 1
        # record the loss
        train_loss += loss.item()
        if idx % 30 == 0:
            losses.append(loss.item())
        idx += 1
    print('epoch: {}, Train Loss: {:.6f}'
          .format(e, train_loss / len(train_data)))
end = time.time() # stop the timer
print('Time used: {:.5f} s'.format(end - start))
x_axis = np.linspace(0, 5, len(losses), endpoint=True)
plt.semilogy(x_axis, losses, label='adam')
plt.legend(loc='best')

The results:

epoch: 0, Train Loss: 0.372057
epoch: 1, Train Loss: 0.186132
epoch: 2, Train Loss: 0.132870
epoch: 3, Train Loss: 0.107864
epoch: 4, Train Loss: 0.091208
Time used: 85.96051 s

[Figure: training loss curve for Adam]
PyTorch also provides a built-in Adam implementation; just call torch.optim.Adam(). Example below.

train_data = DataLoader(train_set, batch_size=64, shuffle=True)
# define a 3-layer network with nn.Sequential
net = nn.Sequential(
    nn.Linear(784, 200),
    nn.ReLU(),
    nn.Linear(200, 10),
)

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

# start training
start = time.time() # start the timer
for e in range(5):
    train_loss = 0
    for im, label in train_data:
        im = Variable(im)
        label = Variable(label)
        # forward pass
        out = net(im)
        loss = criterion(out, label)
        # backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # record the loss
        train_loss += loss.item()
    print('epoch: {}, Train Loss: {:.6f}'
          .format(e, train_loss / len(train_data)))
end = time.time() # stop the timer
print('Time used: {:.5f} s'.format(end - start))

Having gone through these four update methods, we now compare them side by side and watch how the loss evolves (a lower training loss does not necessarily mean a better final model). One caveat: the script below keeps reusing the same net object, so every optimizer after the first starts from already-trained weights.
The comparison code (the four methods above plus plain SGD) follows.

import numpy as np
import torch
from torchvision.datasets import MNIST # import the MNIST dataset shipped with torchvision
from torch.utils.data import DataLoader
from torch import nn
from torch.autograd import Variable
import time
import matplotlib.pyplot as plt
# %matplotlib inline

def data_tf(x):
    x = np.array(x, dtype='float32') / 255
    x = (x - 0.5) / 0.5 # normalize to [-1, 1] (this trick is discussed later)
    x = x.reshape((-1,)) # flatten into a vector
    x = torch.from_numpy(x)
    return x

# download the MNIST handwritten-digit training set
train_set = MNIST(root='/home/hk/Desktop/learn_pytorch/data', train=True, transform=data_tf, download=False) # data_tf normalizes inside the transform

test_set = MNIST(root='/home/hk/Desktop/learn_pytorch/data', train=False, transform=data_tf, download=True)
# define the loss function
criterion = nn.CrossEntropyLoss()

def sgd_momentum(parameters, vs, lr, gamma):
    for param, v in zip(parameters, vs):
        v[:] = gamma * v + lr * param.grad.data
        param.data = param.data - v

train_data = DataLoader(train_set, batch_size=64, shuffle=True)
# define a 3-layer network with nn.Sequential
net = nn.Sequential(
    nn.Linear(784, 200),
    nn.ReLU(),
    nn.Linear(200, 10),
)

# initialize the velocities as zero tensors with the same shapes as the parameters
vs = []
for param in net.parameters():
    vs.append(torch.zeros_like(param.data))

# start training
print("*"*10)
losses = []
idx = 0
start = time.time() # start the timer
for e in range(5):
    train_loss = 0
    for im, label in train_data:
        im = Variable(im)
        label = Variable(label)
        # forward pass
        out = net(im)
        loss = criterion(out, label)
        # backward pass
        net.zero_grad()
        loss.backward()
        sgd_momentum(net.parameters(), vs, 1e-2, 0.9) # momentum 0.9, learning rate 0.01
        # record the loss
        train_loss += loss.item()
        if idx % 30 == 0:
            losses.append(loss.item())
        idx+=1
    print('epoch: {}, Train Loss: {:.6f}'
          .format(e, train_loss / len(train_data)))
end = time.time() # stop the timer
print('Momentum: time used: {:.5f} s'.format(end - start))


#SGD
optimizer = torch.optim.SGD(net.parameters(), lr=1e-2) # without momentum
# start training
print("*"*10)
losses1 = []
idx = 0
start = time.time() # start the timer
for e in range(5):
    train_loss = 0
    for im, label in train_data:
        im = Variable(im)
        label = Variable(label)
        # forward pass
        out = net(im)
        loss = criterion(out, label)
        # backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # record the loss
        train_loss += loss.item()
        if idx % 30 == 0: # record every 30 steps
            losses1.append(loss.item())
        idx += 1
    print('epoch: {}, Train Loss: {:.6f}'
          .format(e, train_loss / len(train_data)))
end = time.time() # stop the timer
print('SGD: time used: {:.5f} s'.format(end - start))


#Adam
print("*"*10)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
losses2 = []
idx = 0
start = time.time() # start the timer
for e in range(5):
    train_loss = 0
    for im, label in train_data:
        im = Variable(im)
        label = Variable(label)
        # forward pass
        out = net(im)
        loss = criterion(out, label)
        # backward pass
        net.zero_grad()
        loss.backward()
        optimizer.step()
        # record the loss
        train_loss += loss.item()
        if idx % 30 == 0:
            losses2.append(loss.item())
        idx += 1
    print('epoch: {}, Train Loss: {:.6f}'
          .format(e, train_loss / len(train_data)))
end = time.time() # stop the timer
print('Adam: time used: {:.5f} s'.format(end - start))

#RMSProp
print("*"*10)
optimizer = torch.optim.RMSprop(net.parameters(), lr=1e-3, alpha=0.9)
losses3 = []
idx = 0
start = time.time() # start the timer
for e in range(5):
    train_loss = 0
    for im, label in train_data:
        im = Variable(im)
        label = Variable(label)
        # forward pass
        out = net(im)
        loss = criterion(out, label)
        # backward pass
        net.zero_grad()
        loss.backward()
        optimizer.step()
        # record the loss
        train_loss += loss.item()
        if idx % 30 == 0:
            losses3.append(loss.item())
        idx += 1
    print('epoch: {}, Train Loss: {:.6f}'
          .format(e, train_loss / len(train_data)))
end = time.time() # stop the timer
print('RMSProp: time used: {:.5f} s'.format(end - start))

#Adagrad
print("*"*10)
optimizer = torch.optim.Adagrad(net.parameters(), lr=1e-2)
losses4 = []
idx = 0
start = time.time() # start the timer
for e in range(5):
    train_loss = 0
    for im, label in train_data:
        im = Variable(im)
        label = Variable(label)
        # forward pass
        out = net(im)
        loss = criterion(out, label)
        # backward pass
        net.zero_grad()
        loss.backward()
        optimizer.step()
        # record the loss
        train_loss += loss.item()
        if idx % 30 == 0:
            losses4.append(loss.item())
        idx += 1
    print('epoch: {}, Train Loss: {:.6f}'
          .format(e, train_loss / len(train_data)))
end = time.time() # stop the timer
print('Adagrad: time used: {:.5f} s'.format(end - start))

x_axis = np.linspace(0, 5, len(losses), endpoint=True)
plt.semilogy(x_axis, losses, label='Momentum:alpha=0.9')
plt.semilogy(x_axis, losses1, label='SGD')
plt.semilogy(x_axis, losses2, label='Adam')
plt.semilogy(x_axis, losses3, label='RMSProp:alpha=0.9')
plt.semilogy(x_axis, losses4, label='Adagrad')

plt.legend(loc='best')
plt.show()

The output of the script:

**********
epoch: 0, Train Loss: 0.370089
epoch: 1, Train Loss: 0.171468
epoch: 2, Train Loss: 0.123055
epoch: 3, Train Loss: 0.098832
epoch: 4, Train Loss: 0.085154
Momentum: time used: 78.35162 s
**********
epoch: 0, Train Loss: 0.056292
epoch: 1, Train Loss: 0.052914
epoch: 2, Train Loss: 0.051503
epoch: 3, Train Loss: 0.050107
epoch: 4, Train Loss: 0.049181
SGD: time used: 55.99813 s
**********
epoch: 0, Train Loss: 0.109644
epoch: 1, Train Loss: 0.087866
epoch: 2, Train Loss: 0.080869
epoch: 3, Train Loss: 0.070733
epoch: 4, Train Loss: 0.063566
Adam: time used: 81.13758 s
**********
epoch: 0, Train Loss: 0.062457
epoch: 1, Train Loss: 0.057542
epoch: 2, Train Loss: 0.054834
epoch: 3, Train Loss: 0.051196
epoch: 4, Train Loss: 0.048507
RMSProp: time used: 64.00369 s
**********
epoch: 0, Train Loss: 0.061198
epoch: 1, Train Loss: 0.014729
epoch: 2, Train Loss: 0.011167
epoch: 3, Train Loss: 0.009214
epoch: 4, Train Loss: 0.007709
Adagrad: time used: 53.32201 s
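As noted before the script, the same net object is trained by every optimizer in turn, which is why the later runs start with such low losses. For a fairer comparison, rebuild the network before each run; a minimal sketch (the helper names make_net and run are introduced here for illustration, reusing train_data, criterion, nn, torch, np and plt from above):

def make_net():
    # a fresh copy of the same 3-layer network
    return nn.Sequential(nn.Linear(784, 200), nn.ReLU(), nn.Linear(200, 10))

def run(optimizer_fn, epochs=5):
    net = make_net()
    optimizer = optimizer_fn(net.parameters())
    history = []
    for e in range(epochs):
        for im, label in train_data:
            out = net(Variable(im))
            loss = criterion(out, Variable(label))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            history.append(loss.item())
    return history

curves = {
    'SGD': run(lambda p: torch.optim.SGD(p, lr=1e-2)),
    'Momentum': run(lambda p: torch.optim.SGD(p, lr=1e-2, momentum=0.9)),
    'Adagrad': run(lambda p: torch.optim.Adagrad(p, lr=1e-2)),
    'RMSProp': run(lambda p: torch.optim.RMSprop(p, lr=1e-3, alpha=0.9)),
    'Adam': run(lambda p: torch.optim.Adam(p, lr=1e-3)),
}
for name, hist in curves.items():
    plt.semilogy(np.linspace(0, len(hist) / len(train_data), len(hist)), hist, label=name)
plt.legend(loc='best')
plt.show()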

[Figure: training loss curves of the five optimizers]
Finally, two animations compare the behavior of the different update methods, giving an intuitive picture of the update process.
[Animations: optimizer behavior comparison (not reproduced here)]
References:
http://ruder.io/optimizing-gradient-descent/index.html
https://github.com/L1aoXingyu/code-of-learn-deep-learning-with-pytorch
