『Keras』Freezing Some Network Layers When Fine-tuning with Keras

1. Background

When fine-tuning with Keras, it is sometimes necessary to freeze some of the network layers to speed up training.

Keras provides a way to freeze an individual layer: layer.trainable = False
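
A minimal sketch of the mechanism (the tiny Sequential model below is only an illustration, not from the original post): setting trainable to False marks that layer's weights as non-trainable, and the change takes effect for any model compiled afterwards.

from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(64, activation="relu", input_shape=(10,)),
    Dense(1),
])

# Freeze the first Dense layer: its weights are excluded from gradient updates
model.layers[0].trainable = False

# Compile after setting the flag so the freeze is respected during training
model.compile(optimizer="adam", loss="mse")
print(model.layers[0].trainable)  # False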

2. Freezing all layers of a model

from keras.applications.densenet import DenseNet121

base_model = DenseNet121(include_top=False, weights="imagenet", input_shape=(224, 224, 3))
# Freeze every layer of the base model
for layer in base_model.layers:
    layer.trainable = False
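
A common next step (sketched here with hypothetical names such as num_classes) is to put a new classification head on top of the frozen base and then compile; the trainable flags set above are honoured by whatever compile() call comes afterwards.

from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

num_classes = 10  # hypothetical number of target classes

# Attach a fresh classifier head to the frozen DenseNet121 base
x = GlobalAveragePooling2D()(base_model.output)
outputs = Dense(num_classes, activation="softmax")(x)
model = Model(inputs=base_model.input, outputs=outputs)

# Compile after the freeze so only the new head is trained
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()  # Non-trainable params should cover the entire DenseNet121 base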

3. Freezing specific layers of a model

In Keras, besides taking a layer from model.layers, you can also retrieve it by name with model.get_layer(layer_name).

from keras.applications.vgg19 import VGG19

base_model = VGG19(weights='imagenet')
base_model.get_layer('block4_pool').trainable = False
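
get_layer freezes just that one named layer; if the intent is instead to freeze everything up to and including a given layer (a common fine-tuning pattern), one possible sketch is:

# Freeze every layer up to and including 'block4_pool'
for layer in base_model.layers:
    layer.trainable = False
    if layer.name == 'block4_pool':
        break

Layers after the break keep their default trainable=True, so only the deeper blocks are updated during fine-tuning.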

How do you find the layer_name?

The answer is to print model.summary(). As shown below, the leftmost column gives the layer_name (note that it is the part outside the parentheses).

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            (None, 224, 224, 3)  0
__________________________________________________________________________________________________
NASNet (Model)                  (None, 7, 7, 1056)   4269716     input_1[0][0]
__________________________________________________________________________________________________
resnet50 (Model)                (None, 7, 7, 2048)   23587712    input_1[0][0]
__________________________________________________________________________________________________
densenet121 (Model)             (None, 7, 7, 1024)   7037504     input_1[0][0]
__________________________________________________________________________________________________
global_average_pooling2d_1 (Glo (None, 1056)         0           NASNet[1][0]
__________________________________________________________________________________________________
global_average_pooling2d_2 (Glo (None, 2048)         0           resnet50[1][0]
__________________________________________________________________________________________________
global_average_pooling2d_3 (Glo (None, 1024)         0           densenet121[1][0]
__________________________________________________________________________________________________
concatenate_5 (Concatenate)     (None, 4128)         0           global_average_pooling2d_1[0][0]
                                                                 global_average_pooling2d_2[0][0]
                                                                 global_average_pooling2d_3[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 4128)         0           concatenate_5[0][0]
__________________________________________________________________________________________________
classifier (Dense)              (None, 200)          825800      dropout_1[0][0]
==================================================================================================
Total params: 35,720,732
Trainable params: 825,800
Non-trainable params: 34,894,932
__________________________________________________________________________________________________
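If the printed summary is hard to scan, a quick convenience check (not from the original post) is to iterate over model.layers and print each name together with its trainable flag; those names can then be passed straight to model.get_layer().

# List every layer name and whether it is currently trainable
for layer in model.layers:
    print(layer.name, layer.trainable)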

