TensorFlow: a detailed look at tf.nn.static_bidirectional_rnn

Copyright notice: WeChat public account 数据挖掘与机器学习进阶之路. This is an original article by the blogger and may not be reproduced without permission. https://blog.csdn.net/u013230189/article/details/82778023

tf.nn.static_bidirectional_rnn

Aliases:

  • tf.contrib.rnn.static_bidirectional_rnn
  • tf.nn.static_bidirectional_rnn
tf.nn.static_bidirectional_rnn(
    cell_fw,
    cell_bw,
    inputs,
    initial_state_fw=None,
    initial_state_bw=None,
    dtype=None,
    sequence_length=None,
    scope=None
)

Defined in tensorflow/python/ops/rnn.py.

See the guide: RNN and Cells (contrib) > Recurrent Neural Networks

Creates a bidirectional recurrent neural network.

Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.
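
As a minimal sketch of the call (assuming TensorFlow 1.x; the sizes here are illustrative, not from the original example):

import tensorflow as tf

# 5 time steps, batch of 4, 8 input features per step
inputs = [tf.placeholder(tf.float32, [4, 8]) for _ in range(5)]
cell_fw = tf.contrib.rnn.BasicLSTMCell(16)
cell_bw = tf.contrib.rnn.BasicLSTMCell(16)
outputs, state_fw, state_bw = tf.nn.static_bidirectional_rnn(
    cell_fw, cell_bw, inputs, dtype=tf.float32)
print(len(outputs), outputs[0].shape)  # 5 (4, 32): forward 16 + backward 16, depth-concatenated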

Args:

  • cell_fw: The RNNCell used for the forward direction.
  • cell_bw: The RNNCell used for the backward direction.
  • inputs: A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements. (That is, the input is a Python list of Tensors, each of shape [batch_size, input_size]. For example, with batch_size documents of 1000 words each, and 100-dimensional word vectors, inputs is a list of 1000 tensors of shape (batch_size, 100).)
  • initial_state_fw: (optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape [batch_size, cell_fw.state_size]. If cell_fw.state_size is a tuple, this should be a tuple of tensors having shapes [batch_size, s] for s in cell_fw.state_size. (See the sketch after this list.)
  • initial_state_bw: (optional) Same as for initial_state_fw, but using the corresponding properties of cell_bw.
  • dtype: (optional) The data type for the initial state. Required if either of the initial states is not provided.
  • sequence_length: (optional) An int32/int64 vector, size [batch_size], containing the actual lengths for each of the sequences. (Also shown in the sketch after this list.)
  • scope: VariableScope for the created subgraph; defaults to "bidirectional_rnn".
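
Explicit initial states and per-example lengths can be built as follows. This is a hedged sketch; the names init_fw, init_bw, and seq_len are illustrative, not from the original post:

import tensorflow as tf

cell_fw = tf.contrib.rnn.BasicLSTMCell(256)
cell_bw = tf.contrib.rnn.BasicLSTMCell(128)
batch = 32  # illustrative batch size

# zero_state builds the all-zero state structure each cell expects
# (an LSTMStateTuple (c, h) for BasicLSTMCell)
init_fw = cell_fw.zero_state(batch, tf.float32)
init_bw = cell_bw.zero_state(batch, tf.float32)

# int32 vector of actual sequence lengths, one per example
seq_len = tf.placeholder(tf.int32, [batch])

These would then be passed as initial_state_fw=init_fw, initial_state_bw=init_bw, sequence_length=seq_len; when explicit initial states are supplied, dtype may be omitted.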

Returns:

A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length T list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn.
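
Note that for LSTM cells each final state is an LSTMStateTuple with separate cell state c and hidden state h, which is why the example below can print fw_state.c and fw_state.h individually. A small illustrative check (assuming the default state_is_tuple=True):

cell = tf.contrib.rnn.BasicLSTMCell(64)
print(cell.state_size)  # LSTMStateTuple(c=64, h=64)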

Raises:

  • TypeError: If cell_fw or cell_bw is not an instance of RNNCell.
  • ValueError: If inputs is None or an empty list.

Source: https://tensorflow.google.cn/api_docs/python/tf/nn/static_bidirectional_rnn

import tensorflow as tf
import numpy as np

# Training hyperparameters
learning_rate = 0.01
max_examples = 400000
batch_size = 128   # defined but not used in this snippet
display_step = 10  # report training progress every 10 steps

n_input = 100      # word-vector dimension per time step
n_steps = 300      # number of time steps (sequence length)
fw_n_hidden = 256  # forward LSTM output size
bw_n_hidden = 128  # backward LSTM output size
n_classes = 10

# The whole 10000-example set is fed in at once here
x = tf.placeholder("float", [10000, n_steps, n_input])
y = tf.placeholder("float", [10000, n_classes])
weights = tf.Variable(tf.random_normal([fw_n_hidden + bw_n_hidden, n_classes]))
biases = tf.Variable(tf.random_normal([n_classes]))

# Convert the [batch, time, features] tensor into the length-T list of
# [batch, features] tensors that static_bidirectional_rnn expects
x = tf.transpose(x, [1, 0, 2])
print(x.shape)  # (300, 10000, 100)
x = tf.reshape(x, [-1, n_input])
print(x.shape)  # (3000000, 100)
x = tf.split(x, n_steps)
print(len(x), x[0].shape)  # 300 (10000, 100)

lstm_fw_cell = tf.contrib.rnn.BasicLSTMCell(fw_n_hidden, forget_bias=1.0)  # forward RNN, 256 output units
lstm_bw_cell = tf.contrib.rnn.BasicLSTMCell(bw_n_hidden, forget_bias=1.0)  # backward RNN, 128 output units

outputs, fw_state, bw_state = tf.contrib.rnn.static_bidirectional_rnn(
    lstm_fw_cell, lstm_bw_cell, x, dtype=tf.float32)

print(outputs[0].shape)  # (10000, 384): 384 = forward output size 256 + backward output size 128
print(len(outputs))      # 300, equal to the number of time steps; outputs[-1], the last step, is usually what downstream layers consume

# LSTM hidden states c and h
print(fw_state.h.shape)  # (10000, 256)
print(fw_state.c.shape)  # (10000, 256)
print(bw_state.h.shape)  # (10000, 128)
print(bw_state.c.shape)  # (10000, 128)
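
The weights and biases defined at the top are never used in the snippet above; a typical way to finish the classifier is to feed outputs[-1], the last time step's concatenated output, through them. A hedged sketch of that final step (the softmax cross-entropy loss and Adam optimizer here are illustrative assumptions, not part of the original post):

pred = tf.matmul(outputs[-1], weights) + biases  # logits, shape (10000, 10)
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)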
