Usage of tf.layers.dropout

dropout: a technique for preventing overfitting in neural networks.

It randomly removes some of the network's neurons during training, which reduces the model's dependence on individual weights W and thus reduces overfitting.

Note: dropout can only be used during training. At test time you must not apply dropout; evaluate with the complete network.
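A minimal sketch of this train/test distinction (assuming TensorFlow 1.x, where tf.layers.dropout and tf.Session are available):

import tensorflow as tf

x = tf.ones([1, 4])                       # dummy input of ones
training = tf.placeholder(tf.bool)        # switch between train and test mode
y = tf.layers.dropout(x, rate=0.5, training=training)

with tf.Session() as sess:
    # Training mode: about half the units are zeroed, and the surviving
    # units are scaled by 1 / (1 - rate) so the expected sum is unchanged.
    print(sess.run(y, feed_dict={training: True}))
    # Inference mode: the input is returned untouched.
    print(sess.run(y, feed_dict={training: False}))

Feeding one boolean placeholder like this is a common way to toggle every dropout layer in the graph between training and inference mode at once.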

tf.layers.dropout(
    inputs,
    rate=0.5,
    noise_shape=None,
    seed=None,
    training=False,
    name=None
)

Arguments:

  • inputs: Tensor input.
  • rate: The dropout rate, between 0 and 1. E.g. "rate=0.1" would drop out 10% of input units. In other words, the fraction of neurons you want to remove during training: 0 means no dropout at all, while 1 drops the entire layer (which raises an error).
  • noise_shape: 1D tensor of type int32 representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if your inputs have shape (batch_size, timesteps, features), and you want the dropout mask to be the same for all timesteps, you can use noise_shape=[batch_size, 1, features] (see the sketch after this list).
  • seed: A Python integer. Used to create random seeds. See tf.set_random_seed for behavior.
  • training: Either a Python boolean, or a TensorFlow boolean scalar tensor (e.g. a placeholder). Whether to return the output in training mode (apply dropout) or in inference mode (return the input untouched).
  • name: The name of the layer (string).
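A short sketch of the noise_shape case described above (again assuming TensorFlow 1.x; the shapes are made up for illustration):

import tensorflow as tf

# (batch_size, timesteps, features) = (2, 3, 4), chosen only for illustration
x = tf.ones([2, 3, 4])
# The [2, 1, 4] mask is broadcast along the timestep axis, so each feature
# is either kept or dropped for all 3 timesteps at once.
y = tf.layers.dropout(x, rate=0.5, noise_shape=[2, 1, 4], training=True)

with tf.Session() as sess:
    print(sess.run(y))   # zeroed columns are identical across the timesteps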


Reposted from blog.csdn.net/o0haidee0o/article/details/80514578