Tuning Deep Neural Networks: Weight Initialization

When a dota-style game starts going badly, a beginner's first reaction is to blame the teammates and think about surrendering, while a veteran first asks whether the problem is their own play. The same holds well beyond games: when newcomers to neural networks see poor final results, they conclude the model or algorithm itself is at fault and swap in a different one, which usually brings no improvement. Experienced practitioners instead check many things first: whether the data itself has anomalies, whether the network structure is coded correctly, whether the model is overfitting, and so on.

This article focuses on how to initialize the weights of the network layers well.

Initial weight values

When constructing a neural network, we need to initialize each layer's weight parameters W and bias b. Out of habit, we often initialize them from a Gaussian distribution with mean 0 and variance 1. Is that really a good idea? Consider an example. Suppose the input layer has 1000 neurons, and we initialize the W and b between the input layer and the first hidden layer as above. Make a further crude assumption: half of the input neurons hold the value 1 and half hold 0. Now compute the input to a hidden neuron: $z = \sum_j w_j x_j + b$, where $x_j$ is the output of the input layer. Since half of the $x_j$ are 0 and half are 1, $z$ is simply the sum of 501 independent Gaussian random variables (500 weight parameters plus b), so $z$ itself follows a Gaussian with mean 0 and standard deviation $\sqrt{501} \approx 22.4$. Here is what this distribution looks like:
[Figure: Gaussian density of z with standard deviation ≈ 22.4]
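To make this concrete, here is a small NumPy simulation (my own sketch, not code from the post) that draws many such neurons under these assumptions and checks the spread of z:

 import numpy as np

 n_in = 1000                          # input-layer neurons
 n_trials = 10000                     # number of simulated hidden neurons
 x = np.zeros(n_in)
 x[:n_in // 2] = 1.0                  # half the inputs are 1, half are 0

 w = np.random.randn(n_trials, n_in)  # weights drawn from N(0, 1), one row per trial
 b = np.random.randn(n_trials)        # bias drawn from N(0, 1)
 z = w @ x + b                        # z = sum_j w_j x_j + b, one value per trial

 print(z.std())                       # ~22.4, i.e. sqrt(501)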

This distribution is "short and fat" because the variance is so large: $z$ can easily come out quite large ($z \gg 1$) or quite small ($z \ll -1$). If we choose sigmoid as the activation function, first look at what sigmoid looks like:

[Figure: the sigmoid function]

It is easy to see that when $z$ is very large or very small, the activation's output gets very close to 1 or 0, and its derivative approaches 0. Anyone familiar with the backpropagation algorithm knows that the gradient expressions for W and b contain a factor of $\sigma'(z)$, so the updates to W and b become tiny; conversely, with W and b barely changing, $z$ changes even less. Such a neuron has become saturated. Moreover, a saturated neuron's influence on the other layers also becomes small, so the loss of the whole network barely moves and training becomes extremely slow.
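As a quick numerical check (my own illustration using the standard formula $\sigma'(z) = \sigma(z)(1 - \sigma(z))$), the derivative is already negligible at the kind of z values the wide initialization produces:

 import numpy as np

 def sigmoid(z):
     return 1.0 / (1.0 + np.exp(-z))

 def sigmoid_prime(z):
     # derivative of the sigmoid: sigma(z) * (1 - sigma(z))
     s = sigmoid(z)
     return s * (1.0 - s)

 for z in [0.0, 2.0, 10.0, 22.4]:
     print(z, sigmoid_prime(z))
 # 0.0   0.25
 # 2.0   ~0.105
 # 10.0  ~4.5e-05
 # 22.4  ~1.9e-10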

So is there an initialization that avoids this problem? If we can avoid such saturated neurons, we also avoid the painfully slow training they cause.
Suppose a neuron has $n_{in}$ incoming weight parameters. Initialize them from a Gaussian with mean 0 and standard deviation $1/\sqrt{n_{in}}$, while the bias b is still initialized from the standard Gaussian with mean 0 and standard deviation 1. Keeping the assumption that half the inputs are 0 and half are 1, the hidden neuron's input $z$ now follows a Gaussian with mean 0 and standard deviation $\sqrt{500/1000 + 1} = \sqrt{3/2} \approx 1.22$. This distribution looks like the figure below.

[Figure: Gaussian density of z with standard deviation ≈ 1.22]

Clearly much "thinner" than before: the values of $z$ are concentrated around 0, and neurons are far less likely to saturate.
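Rerunning the earlier simulation with the rescaled weights confirms the number (again my own sketch):

 import numpy as np

 n_in = 1000
 n_trials = 10000
 x = np.zeros(n_in)
 x[:n_in // 2] = 1.0               # same input assumption as before

 # weights now have stddev 1/sqrt(n_in); the bias keeps stddev 1
 w = np.random.randn(n_trials, n_in) / np.sqrt(n_in)
 b = np.random.randn(n_trials)
 z = w @ x + b

 print(z.std())                    # ~1.22, i.e. sqrt(500/1000 + 1)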

Enough theory; let the experimental results speak.
We use the MNIST dataset with a single hidden layer of 30 neurons, batch_size of 10, and learning_rate of 0.1. For now, ignore the rest of the code and look only at the initialization.

First, we initialize all parameters at the standard-Gaussian scale (tf.truncated_normal with stddev=1.0):

 import tensorflow as tf

 INPUT_NODE, HIDDEN_NODE, OUTPUT_NODE = 784, 30, 10  # MNIST pixels, hidden units, classes

 # all parameters drawn at the standard-Gaussian scale (stddev = 1.0)
 weight1 = tf.Variable(tf.truncated_normal([INPUT_NODE, HIDDEN_NODE], stddev=1.0))
 biases1 = tf.Variable(tf.truncated_normal([HIDDEN_NODE], stddev=1.0))
 weight2 = tf.Variable(tf.truncated_normal([HIDDEN_NODE, OUTPUT_NODE], stddev=1.0))
 biases2 = tf.Variable(tf.truncated_normal([OUTPUT_NODE], stddev=1.0))
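For context, the rest of the graph might be wired up roughly as follows. This is a minimal TF1-style sketch under my own assumptions (the post does not show this part, and the names x, y_, hidden, logits, loss, train_op are mine), reusing the variables defined above:

 x = tf.placeholder(tf.float32, [None, INPUT_NODE])    # batch of flattened 28x28 images
 y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE])  # one-hot labels

 hidden = tf.nn.sigmoid(tf.matmul(x, weight1) + biases1)  # sigmoid hidden layer, 30 units
 logits = tf.matmul(hidden, weight2) + biases2

 loss = tf.reduce_mean(
     tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
 train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)  # learning_rate = 0.1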

Here is how the model performs on the validation set over 1000 training steps, and on the test set at the end:

After 0 training steps, validation accuracy is 0.045
After 10 training steps, validation accuracy is 0.2674
After 20 training steps, validation accuracy is 0.1942
After 30 training steps, validation accuracy is 0.353
After 40 training steps, validation accuracy is 0.4726
After 50 training steps, validation accuracy is 0.5402
After 60 training steps, validation accuracy is 0.5476
After 70 training steps, validation accuracy is 0.566
After 80 training steps, validation accuracy is 0.5436
After 90 training steps, validation accuracy is 0.645
After 100 training steps, validation accuracy is 0.6898
......
After 800 training steps, validation accuracy is 0.8568
After 810 training steps, validation accuracy is 0.8586
After 820 training steps, validation accuracy is 0.8736
After 830 training steps, validation accuracy is 0.8652
After 840 training steps, validation accuracy is 0.8658
After 850 training steps, validation accuracy is 0.8386
After 860 training steps, validation accuracy is 0.8772
After 870 training steps, validation accuracy is 0.8606
After 880 training steps, validation accuracy is 0.856
After 890 training steps, validation accuracy is 0.8584
After 900 training steps, validation accuracy is 0.8752
After 910 training steps, validation accuracy is 0.8672
After 920 training steps, validation accuracy is 0.8666
After 930 training steps, validation accuracy is 0.8604
After 940 training steps, validation accuracy is 0.8662
After 950 training steps, validation accuracy is 0.8742
After 960 training steps, validation accuracy is 0.8726
After 970 training steps, validation accuracy is 0.8672
After 980 training steps, validation accuracy is 0.8744
After 990 training steps, validation accuracy is 0.8772
After 1000 training steps, test accuracy is 0.8567

Now we initialize the parameters the second way:

 # stddev = 1/28 = 1/sqrt(784) = 1/sqrt(INPUT_NODE), applied to every parameter
 weight1 = tf.Variable(tf.truncated_normal([INPUT_NODE, HIDDEN_NODE], stddev=1.0/28))
 biases1 = tf.Variable(tf.truncated_normal([HIDDEN_NODE], stddev=1.0/28))
 weight2 = tf.Variable(tf.truncated_normal([HIDDEN_NODE, OUTPUT_NODE], stddev=1.0/28))
 biases2 = tf.Variable(tf.truncated_normal([OUTPUT_NODE], stddev=1.0/28))
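Note that the post applies the single stddev 1/28 = 1/√784 to every parameter, including the biases and the second layer. A variant that follows the 1/√n_in rule from the derivation literally, with a per-layer fan-in and the biases left at stddev 1, would look like this (my own variant, not the author's code):

 import numpy as np

 # stddev = 1/sqrt(fan_in) per layer; biases keep stddev 1 as in the derivation
 weight1 = tf.Variable(tf.truncated_normal([INPUT_NODE, HIDDEN_NODE],
                                            stddev=1.0 / np.sqrt(INPUT_NODE)))
 biases1 = tf.Variable(tf.truncated_normal([HIDDEN_NODE], stddev=1.0))
 weight2 = tf.Variable(tf.truncated_normal([HIDDEN_NODE, OUTPUT_NODE],
                                            stddev=1.0 / np.sqrt(HIDDEN_NODE)))
 biases2 = tf.Variable(tf.truncated_normal([OUTPUT_NODE], stddev=1.0))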

Now look at the model's performance:

After 0 training steps, validation accuracy is 0.0962
After 10 training steps, validation accuracy is 0.107
After 20 training steps, validation accuracy is 0.2356
After 30 training steps, validation accuracy is 0.2572
After 40 training steps, validation accuracy is 0.447
After 50 training steps, validation accuracy is 0.5948
After 60 training steps, validation accuracy is 0.6158
After 70 training steps, validation accuracy is 0.5572
After 80 training steps, validation accuracy is 0.5966
After 90 training steps, validation accuracy is 0.7442
After 100 training steps, validation accuracy is 0.7888
......
After 800 training steps, validation accuracy is 0.9174
After 810 training steps, validation accuracy is 0.9052
After 820 training steps, validation accuracy is 0.918
After 830 training steps, validation accuracy is 0.9126
After 840 training steps, validation accuracy is 0.9184
After 850 training steps, validation accuracy is 0.9112
After 860 training steps, validation accuracy is 0.8968
After 870 training steps, validation accuracy is 0.9138
After 880 training steps, validation accuracy is 0.9138
After 890 training steps, validation accuracy is 0.8954
After 900 training steps, validation accuracy is 0.9128
After 910 training steps, validation accuracy is 0.9068
After 920 training steps, validation accuracy is 0.9164
After 930 training steps, validation accuracy is 0.9096
After 940 training steps, validation accuracy is 0.8854
After 950 training steps, validation accuracy is 0.9046
After 960 training steps, validation accuracy is 0.9126
After 970 training steps, validation accuracy is 0.9206
After 980 training steps, validation accuracy is 0.8944
After 990 training steps, validation accuracy is 0.9086
After 1000 training steps, test accuracy is 0.8992

Comparing accuracy at the same step counts shows that the second method outperforms the first.
This can be shown visually: [Figure: validation accuracy versus training step for the two initializations]

The blue line is the first method; the purple line is the second.
In this experiment, the first method can also eventually reach quite good accuracy, but it needs more training steps and more time (which the plot above does not show).

Besides speeding up training, the second initialization scheme can sometimes also improve the model's final accuracy, which can be verified on other datasets.
