Keras Quick Start: Learning Notes (1), End-to-End MNIST Digit Recognition

End-to-End MNIST Digit Recognition

The following walks through the end-to-end process of training a digit-recognition model on MNIST.
The dataset was compiled by Professor Yann LeCun and his team; it contains 60,000 training samples and 10,000 test samples. Each sample is a 28x28 grid of grayscale pixel values (a single channel, with no R, G, B layers). The task is to classify each image into one of the ten digit classes 0-9.
Below are some sample handwritten digits. (figure omitted)
Next, we use Keras to build and train a convolutional network.

1. Import the data and Keras convolution modules

A quick primer on the Sequential model: a Sequential model is a linear stack of layers, one straight path from input to output. You can build one either by passing a list of layers to the Sequential constructor, or by starting from an empty model and adding layers one at a time with model.add().
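For reference, here is the list-construction form (a minimal sketch in the spirit of the Keras docs; the layer sizes are only illustrative):

from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential([
    Dense(32, input_shape=(784,)),  # fully connected layer on a flattened 28x28 input
    Activation('relu'),
    Dense(10),
    Activation('softmax'),          # 10-way class probabilities
])

In this post we take the second approach and call model.add() once per layer.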

import numpy as np
from keras.datasets import mnist  # the MNIST dataset
# import the Keras model and layer classes
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D

2. Load the data

First read the data in and see what it looks like. If the dataset has not been downloaded before, the first call may take a while.

(X_train, Y_train), (X_test, Y_test) = mnist.load_data()
print(X_train[0].shape)
print(Y_train[0])

Output:

(28, 28)
5
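Since the original sample figure is not reproduced here, you can plot a digit yourself (a quick sketch, assuming matplotlib is installed):

import matplotlib.pyplot as plt

plt.imshow(X_train[0], cmap='gray')   # the first training image
plt.title('label: %d' % Y_train[0])   # its label, 5
plt.show()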

3. Preprocess the data

(1) Reshape the data to the shape the network expects, adding a single channel dimension; the astype('float32') call is needed to turn the integer pixel values into floats before normalization.
(2) Normalize the data: these are images whose pixel values run from 0 to 255, so dividing by 255 scales everything to [0, 1].
(3) One-hot encode the labels (a one-call built-in alternative is shown after the code).

X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32')

X_train /= 255
X_test /= 255

# convert a digit label (0-9) into a length-10 one-hot vector
def tran_y(y):
    y_ohe = np.zeros(10)
    y_ohe[y] = 1
    return y_ohe

Y_train_ohe = np.array([tran_y(Y_train[i]) for i in range(len(Y_train))])
Y_test_ohe = np.array([tran_y(Y_test[i]) for i in range(len(Y_test))])
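Keras also ships a utility that performs the same conversion, so the helper above can be replaced with a single call (equivalent under Keras 2.x):

from keras.utils import to_categorical

Y_train_ohe = to_categorical(Y_train, num_classes=10)
Y_test_ohe = to_categorical(Y_test, num_classes=10)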

4. Build the model

We build the model with the Sequential approach, adding layers one after another.
(1) The first layer is a convolution with 64 filters, a 3x3 kernel, 1x1 strides, and 'same' padding so the output keeps the input's spatial size; the input shape is (28, 28, 1). The input_shape argument is required only on the layer that first receives data; later layers infer it. The activation is relu.
(2) The second layer is max pooling with a 2x2 pool size.
(3) Next comes dropout to guard against overfitting; a rate of 0.5 randomly drops 50% of the layer's units during training.
(4) The conv/pool/dropout block is then repeated with more filters.
(5) Finally the feature maps are flattened into a vector.

model = Sequential()

model.add(Conv2D(filters=64, kernel_size=(3, 3), strides=(1, 1), padding="same", input_shape=(28, 28, 1), activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))

model.add(Conv2D(filters=128, kernel_size=(3, 3), strides=(1, 1), padding="same", activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))

model.add(Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 1), padding="same", activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))

model.add(Flatten())

Build the fully connected part of the network:

model.add(Dense(128, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(10, activation='softmax'))
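Before compiling, it is worth printing a summary to check each layer's output shape and parameter count:

model.summary()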

Now compile the model. categorical_crossentropy is the loss function that pairs with a softmax output and expects one-hot targets like the ones built above (for plain integer labels, the sparse_categorical_crossentropy variant is the one to use). The optimizer is Adagrad; see https://blog.csdn.net/tsyccnh/article/details/76769232 for an explanation of how Adagrad works.
metrics lists the performance measures to report; here we track accuracy.

model.compile(loss='categorical_crossentropy', optimizer= 'adagrad', metrics=['accuracy'])

Pass in the training inputs X and training targets Y; epochs is the number of full passes over the training data, and batch_size is the number of samples per gradient update. validation_data is evaluated at the end of every epoch.

model.fit(X_train, Y_train_ohe, validation_data = (X_test, Y_test_ohe), epochs = 20, batch_size = 128)

Training output:

Train on 60000 samples, validate on 10000 samples
Epoch 1/20
60000/60000 [==============================] - 293s 5ms/step - loss: 1.3511 - acc: 0.5497 - val_loss: 0.5715 - val_acc: 0.8184
Epoch 2/20
60000/60000 [==============================] - 291s 5ms/step - loss: 0.7624 - acc: 0.7473 - val_loss: 0.4115 - val_acc: 0.8735
Epoch 3/20
60000/60000 [==============================] - 294s 5ms/step - loss: 0.6173 - acc: 0.7974 - val_loss: 0.3401 - val_acc: 0.8921
Epoch 4/20
60000/60000 [==============================] - 288s 5ms/step - loss: 0.5401 - acc: 0.8242 - val_loss: 0.3213 - val_acc: 0.8958
Epoch 5/20
60000/60000 [==============================] - 293s 5ms/step - loss: 0.4792 - acc: 0.8425 - val_loss: 0.2857 - val_acc: 0.9092
Epoch 6/20
60000/60000 [==============================] - 285s 5ms/step - loss: 0.4468 - acc: 0.8547 - val_loss: 0.2473 - val_acc: 0.9237
Epoch 7/20
60000/60000 [==============================] - 281s 5ms/step - loss: 0.4083 - acc: 0.8675 - val_loss: 0.2283 - val_acc: 0.9290
Epoch 8/20
60000/60000 [==============================] - 285s 5ms/step - loss: 0.3797 - acc: 0.8749 - val_loss: 0.2044 - val_acc: 0.9363
Epoch 9/20
60000/60000 [==============================] - 283s 5ms/step - loss: 0.3517 - acc: 0.8879 - val_loss: 0.1874 - val_acc: 0.9405
Epoch 10/20
60000/60000 [==============================] - 282s 5ms/step - loss: 0.3304 - acc: 0.8937 - val_loss: 0.1720 - val_acc: 0.9458
Epoch 11/20
60000/60000 [==============================] - 282s 5ms/step - loss: 0.3086 - acc: 0.9004 - val_loss: 0.1588 - val_acc: 0.9507
Epoch 12/20
60000/60000 [==============================] - 281s 5ms/step - loss: 0.2918 - acc: 0.9063 - val_loss: 0.1617 - val_acc: 0.9490
Epoch 13/20
60000/60000 [==============================] - 281s 5ms/step - loss: 0.2746 - acc: 0.9129 - val_loss: 0.1451 - val_acc: 0.9539
Epoch 14/20
60000/60000 [==============================] - 280s 5ms/step - loss: 0.2626 - acc: 0.9163 - val_loss: 0.1374 - val_acc: 0.9548
Epoch 15/20
60000/60000 [==============================] - 280s 5ms/step - loss: 0.2471 - acc: 0.9204 - val_loss: 0.1291 - val_acc: 0.9582
Epoch 16/20
60000/60000 [==============================] - 283s 5ms/step - loss: 0.2384 - acc: 0.9239 - val_loss: 0.1239 - val_acc: 0.9605
Epoch 17/20
60000/60000 [==============================] - 283s 5ms/step - loss: 0.2299 - acc: 0.9262 - val_loss: 0.1191 - val_acc: 0.9622
Epoch 18/20
60000/60000 [==============================] - 283s 5ms/step - loss: 0.2228 - acc: 0.9295 - val_loss: 0.1122 - val_acc: 0.9644
Epoch 19/20
60000/60000 [==============================] - 283s 5ms/step - loss: 0.2095 - acc: 0.9319 - val_loss: 0.1061 - val_acc: 0.9661
Epoch 20/20
60000/60000 [==============================] - 283s 5ms/step - loss: 0.2075 - acc: 0.9335 - val_loss: 0.1069 - val_acc: 0.9651

Evaluate the model's accuracy on the test set:

scores = model.evaluate(X_test, Y_test_ohe, verbose=0)
print("Accuracy: %.2f%%"% (scores[1]*100))

The result:

Accuracy: 96.51%
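With the trained model in hand, classifying an individual image takes one call (a small sketch; predict expects a batch dimension, hence the slice):

pred = model.predict(X_test[:1])  # shape (1, 10): class probabilities from the softmax layer
print(np.argmax(pred, axis=1))    # the predicted digit for the first test image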

This falls short of the 99% accuracy reported in the book. My guess is that the environment differs (this run used a CPU-only laptop), but the exact cause is unclear; if you know, please share in the comments.
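One plausible explanation (an assumption, not verified here): with 0.5 dropout after every convolution block, Adagrad converges slowly, and the training accuracy above is still climbing at epoch 20. Training longer, lowering the dropout rate, or switching the optimizer to Adam would likely close most of the gap, e.g.:

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])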


Reposted from blog.csdn.net/m0_38106113/article/details/81459546