TensorFlow 2.0 Learning (15): Depthwise Separable Convolution

Depthwise Separable Convolution

  • Reference article.
  • Trades a slight loss in accuracy for a large reduction in the number of parameters (see the parameter-count sketch below).
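A quick illustration of that trade-off (a minimal sketch, not from the original post): for a 3×3 convolution mapping C_in input channels to C_out output channels, a standard Conv2D needs 3·3·C_in·C_out weights, while a depthwise separable convolution needs only 3·3·C_in (depthwise step) plus C_in·C_out (1×1 pointwise step). The snippet below verifies this with Keras for C_in=32, C_out=64:

import tensorflow as tf
from tensorflow import keras

inputs = keras.Input(shape=(28, 28, 32))
standard = keras.layers.Conv2D(64, kernel_size=3, padding='same')(inputs)
separable = keras.layers.SeparableConv2D(64, kernel_size=3, padding='same')(inputs)

# Standard Conv2D:  3*3*32*64 + 64 (bias)                        = 18496
# Separable:        3*3*32 (depthwise) + 32*64 (pointwise) + 64  = 2400
print(keras.Model(inputs, standard).count_params())   # 18496
print(keras.Model(inputs, separable).count_params())  # 2400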

Hands-On Practice

  • Building the model: replace Conv2D with SeparableConv2D
# Build the model with tf.keras.models.Sequential()
# Construct the deep convolutional network
model = keras.models.Sequential()
# Add a convolutional layer
# filters: number of convolution kernels; kernel_size: kernel size
# padding: whether to pad the input so the spatial size is preserved
# activation: activation function; input_shape: input image size (1 channel)
# The input layer is still an ordinary convolution
model.add(keras.layers.Conv2D(filters=32, kernel_size=3,
                              padding='same',
                              activation="selu",
                              input_shape=(28, 28, 1)))
# filters=3 here appears to be a typo for 32 in the original post; it is kept
# as-is because the model.summary() and training logs below were produced with it
model.add(keras.layers.SeparableConv2D(filters=3, kernel_size=3,
                                       padding='same',
                                       activation="selu"))
# Add a pooling layer
# After pooling, width and height are each halved, so the feature map's area
# shrinks to 1/4 and information is lost; to compensate, the number of
# filters is doubled in the subsequent convolutional layers
model.add(keras.layers.MaxPool2D(pool_size=2))
model.add(keras.layers.SeparableConv2D(filters=64, kernel_size=3,
                                       padding='same',
                                       activation="selu"))
model.add(keras.layers.SeparableConv2D(filters=64, kernel_size=3,
                                       padding='same',
                                       activation="selu"))
model.add(keras.layers.MaxPool2D(pool_size=2))

model.add(keras.layers.SeparableConv2D(filters=128, kernel_size=3,
                                       padding='same',
                                       activation="selu"))
model.add(keras.layers.SeparableConv2D(filters=128, kernel_size=3,
                                       padding='same',
                                       activation="selu"))
model.add(keras.layers.MaxPool2D(pool_size=2))
# Flatten the output
model.add(keras.layers.Flatten())
# Add the fully connected layers
model.add(keras.layers.Dense(128, activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="sgd",
              metrics=["accuracy"])
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 28, 28, 32)        320       
_________________________________________________________________
separable_conv2d (SeparableC (None, 28, 28, 3)         387       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 14, 14, 3)         0         
_________________________________________________________________
separable_conv2d_1 (Separabl (None, 14, 14, 64)        283       
_________________________________________________________________
separable_conv2d_2 (Separabl (None, 14, 14, 64)        4736      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 7, 7, 64)          0         
_________________________________________________________________
separable_conv2d_3 (Separabl (None, 7, 7, 128)         8896      
_________________________________________________________________
separable_conv2d_4 (Separabl (None, 7, 7, 128)         17664     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 3, 3, 128)         0         
_________________________________________________________________
flatten (Flatten)            (None, 1152)              0         
_________________________________________________________________
dense (Dense)                (None, 128)               147584    
_________________________________________________________________
dense_1 (Dense)              (None, 10)                1290      
=================================================================
Total params: 181,160
Trainable params: 181,160
Non-trainable params: 0
_________________________________________________________________
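Note how the summary already shows the savings: separable_conv2d_2 maps 64 channels to 64 with only 4736 parameters (3·3·64 depthwise + 64·64 pointwise + 64 biases), whereas a standard 3×3 Conv2D between the same shapes would need 3·3·64·64 + 64 = 36928.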
  • Training and evaluation
# Start training
# epochs=10: iterate over the full training set 10 times
# validation_data: evaluate on the validation set after each epoch
# You may find that loss and accuracy stop improving later on, because plain
# SGD can get stuck near a local minimum; switching the optimizer to adam helps
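# A hedged sketch of that switch: recompile with Adam before calling fit
# (kept commented out here, because the logs below were produced with sgd)
# model.compile(loss="sparse_categorical_crossentropy",
#               optimizer="adam",
#               metrics=["accuracy"])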

# callbacks: hooks invoked automatically during training, e.g. after each
# epoch, to check whether the loss has reached a target value
# They are therefore attached to the training process, i.e. passed to fit()
# Here we use the TensorBoard, EarlyStopping and ModelCheckpoint callbacks

# TensorBoard needs a log directory and ModelCheckpoint needs a file name,
# so create the directory and the file path first

import os

logdir = "Separable_cnn-selu-callbacks"
if not os.path.exists(logdir):
    os.mkdir(logdir)
# Build the checkpoint path inside the log folder: os.path.join(a, b) gives a/b
output_model_file = os.path.join(logdir, "fashion_mnist_model.h5")


callbacks = [
    keras.callbacks.TensorBoard(log_dir=logdir),
    keras.callbacks.ModelCheckpoint(output_model_file,
                                    save_best_only=True),
    keras.callbacks.EarlyStopping(patience=5, min_delta=1e-3),
]
history = model.fit(x_train_scaled, y_train, epochs=10,
                    validation_data=(x_valid_scaled, y_valid),
                    callbacks=callbacks)
# To view TensorBoard:
# 1. In the same environment, cd to the directory containing the log folder
# 2. Run: tensorboard --logdir="Separable_cnn-selu-callbacks"
# 3. Open a browser at localhost:<port> (6006 by default)
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 68s 1ms/sample - loss: 2.2564 - accuracy: 0.1510 - val_loss: 1.3870 - val_accuracy: 0.5138
Epoch 2/10
55000/55000 [==============================] - 76s 1ms/sample - loss: 0.8879 - accuracy: 0.6676 - val_loss: 0.7468 - val_accuracy: 0.7254
Epoch 3/10
55000/55000 [==============================] - 74s 1ms/sample - loss: 0.7308 - accuracy: 0.7241 - val_loss: 0.6707 - val_accuracy: 0.7560
Epoch 4/10
55000/55000 [==============================] - 79s 1ms/sample - loss: 0.6656 - accuracy: 0.7478 - val_loss: 0.6315 - val_accuracy: 0.7652
Epoch 5/10
55000/55000 [==============================] - 79s 1ms/sample - loss: 0.6121 - accuracy: 0.7685 - val_loss: 0.5735 - val_accuracy: 0.7790
Epoch 6/10
55000/55000 [==============================] - 72s 1ms/sample - loss: 0.5534 - accuracy: 0.7925 - val_loss: 0.5291 - val_accuracy: 0.8036
Epoch 7/10
55000/55000 [==============================] - 81s 1ms/sample - loss: 0.4927 - accuracy: 0.8157 - val_loss: 0.4582 - val_accuracy: 0.8362
Epoch 8/10
55000/55000 [==============================] - 76s 1ms/sample - loss: 0.4468 - accuracy: 0.8342 - val_loss: 0.4122 - val_accuracy: 0.8520
Epoch 9/10
55000/55000 [==============================] - 74s 1ms/sample - loss: 0.4169 - accuracy: 0.8452 - val_loss: 0.3973 - val_accuracy: 0.8550
Epoch 10/10
55000/55000 [==============================] - 77s 1ms/sample - loss: 0.3944 - accuracy: 0.8541 - val_loss: 0.3848 - val_accuracy: 0.8616
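Because ModelCheckpoint was created with save_best_only=True, the epoch with the lowest validation loss is saved to disk. A minimal sketch (reusing the output_model_file path and validation arrays defined above) of reloading and re-checking that checkpoint:

best_model = keras.models.load_model(output_model_file)
best_model.evaluate(x_valid_scaled, y_valid, verbose=2)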
import pandas as pd
import matplotlib.pyplot as plt

def plot_learning_curves(history):
    # Convert history.history (a dict of per-epoch metrics) to a DataFrame and plot it
    pd.DataFrame(history.history).plot(figsize=(8, 5))
    plt.grid(True)
    # gca: get current axes; gcf: get current figure
    plt.gca().set_ylim(0, 3)
    plt.show()
plot_learning_curves(history)

# Why the loss barely moves early in training:
# 1. There are many parameters and training is not yet sufficient
# 2. Vanishing gradients (the selu activation used above helps mitigate this)

model.evaluate(x_test_scaled, y_test, verbose=2)

10000/10000 - 4s - loss: 0.4249 - accuracy: 0.8449

[0.42493420658111575, 0.8449]
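model.evaluate returns the loss followed by each compiled metric, hence the [loss, accuracy] pair above. The test accuracy (84.5%) is close to the final validation accuracy (86.2%), so the model generalizes about as well as it validated.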
Source: blog.csdn.net/Smile_mingm/article/details/104570250