[Kaggle MNIST Series] A Two-Layer Neural Network in PyTorch (Improved) (Part 2)

Copyright notice: This is the blogger's original article; reproduction without permission is prohibited. https://blog.csdn.net/a19990412/article/details/84069941

Overview

Previous article: [Kaggle MNIST Series] A Two-Layer Neural Network in PyTorch (a model in four lines of code)

This improves on my previous article. The only change is the loss function, swapped from CrossEntropyLoss to MultiMarginLoss (a multi-class hinge loss); a short comparison of the two criteria follows the score summary below.

  • Score: 0.81
  • Rank: 2609
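
To make the swap concrete, here is a minimal sketch (mine, not from the original post) contrasting the two criteria on the same inputs. Both accept (N, C) raw scores and (N,) integer class targets; CrossEntropyLoss applies log-softmax plus negative log-likelihood, while MultiMarginLoss is a hinge loss that penalizes any class whose score comes within a margin (default 1) of the target class's score.

import torch
import torch.nn as nn

# Illustrative logits for a batch of 2 samples over 10 classes.
logits = torch.randn(2, 10)
targets = torch.tensor([3, 7])

ce = nn.CrossEntropyLoss()(logits, targets)  # log-softmax + negative log-likelihood
mm = nn.MultiMarginLoss()(logits, targets)   # mean over i != y of max(0, 1 - x[y] + x[i])
print(ce.item(), mm.item())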

Code

import pandas as pd
import torch.utils.data as data
import torch
import torch.nn as nn

file = './all/train.csv'
LR = 0.01


class MNISTCSVDataset(data.Dataset):
    """Serves the Kaggle MNIST CSV in sequential chunks of 100 rows."""

    def __init__(self, csv_file, Train=True):
        self.dataframe = pd.read_csv(csv_file, iterator=True)
        self.Train = Train

    def __len__(self):
        # Each item is a 100-row chunk: 42000 training rows -> 420 chunks,
        # 28000 test rows -> 280 chunks.
        if self.Train:
            return 420
        else:
            return 280

    def __getitem__(self, idx):
        # The CSV iterator reads sequentially, so idx is ignored.
        chunk = self.dataframe.get_chunk(100)
        # as_matrix() and .ix were removed from pandas; use to_numpy() and .iloc.
        ylabel = chunk['label'].to_numpy().astype('float')
        xdata = chunk.iloc[:, 1:].to_numpy().astype('float')
        return ylabel, xdata


mydataset = MNISTCSVDataset(file)

# shuffle must stay False: the dataset streams the CSV in order and ignores indices.
train_loader = torch.utils.data.DataLoader(mydataset, batch_size=1, shuffle=False)

net = nn.Sequential(
    nn.Linear(28 * 28, 100),
    nn.ReLU(),
    nn.Linear(100, 10)
)

loss_function = nn.MultiMarginLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=LR)
for step, (yl, xd) in enumerate(train_loader):
    # xd arrives as (1, 100, 784); squeeze away the loader's dummy batch dim.
    output = net(xd.squeeze().float())
    yl = yl.long()
    loss = loss_function(output, yl.squeeze())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if step % 20 == 0:
        print('step %d' % step, loss.item())

torch.save(net, 'divided-net.pkl')
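
The script above only trains and saves the model. As a rough sketch of how a submission file could then be produced (this part is not in the original post; the ./all/test.csv path and the ImageId/Label column names are assumptions based on the Kaggle Digit Recognizer format):

import pandas as pd
import torch

net = torch.load('divided-net.pkl')  # reload the full module saved above
net.eval()

test_df = pd.read_csv('./all/test.csv')  # test set has no 'label' column
x = torch.from_numpy(test_df.to_numpy().astype('float32'))
with torch.no_grad():
    preds = net(x).argmax(dim=1)  # predicted digit per row

submission = pd.DataFrame({'ImageId': range(1, len(preds) + 1),
                           'Label': preds.numpy()})
submission.to_csv('submission.csv', index=False)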
