Building Your First Multilayer Perceptron in TensorFlow (MNIST Handwritten Digit Classification)

Copyright notice: learning by standing on the shoulders of giants. https://blog.csdn.net/zgcr654321/article/details/83963679

A multilayer perceptron (MLP) adds one or more hidden layers on top of a single-layer neural network. The hidden layers sit between the input layer and the output layer.

For example:

(Figure from the original post, not reproduced here: a multilayer perceptron with 5 inputs, one hidden layer of 3 units, and 2 outputs.)

In the MLP shown in the figure, the numbers of inputs and outputs are 5 and 2, and the hidden layer in between contains 3 hidden units. Because the input layer involves no computation, this MLP counts as having 2 layers. Every neuron in the hidden layer is connected to every input, and every neuron in the output layer is connected to every hidden neuron. The hidden layer and the output layer of an MLP are therefore both fully connected layers.
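
In symbols (standard MLP notation, added here for clarity; the original states this only in prose): given a minibatch of inputs $X$, a single-hidden-layer MLP computes

$$H = \phi(XW_h + b_h), \qquad O = HW_o + b_o$$

where $\phi$ is an elementwise nonlinear activation function (the code below uses ReLU), $W_h$ and $b_h$ are the hidden layer's weight matrix and bias, and $W_o$ and $b_o$ are the output layer's.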

If, however, the network only applied the linear transformation of multiplying by weight parameters and the translation of adding bias parameters, with no nonlinear transformation in between, then adding hidden layers would gain nothing: the composition of affine transformations is itself an affine transformation.
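
A one-line derivation makes this precise (added here for clarity): drop the activation $\phi$ and compose the two affine layers:

$$O = (XW_h + b_h)W_o + b_o = X(W_hW_o) + (b_hW_o + b_o)$$

This is just a single affine transformation with weights $W_hW_o$ and bias $b_hW_o + b_o$, so without a nonlinearity between layers the extra hidden layer adds no expressive power.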

If every hidden unit's parameters are identical after initialization, then during training every hidden unit computes the same value from the same input, and the output layer receives exactly the same value from each of them. The gradients for these parameters are then identical as well, so after each update the parameters remain equal to one another. Since every hidden unit has the same activation function and the same parameters, all of them keep computing the same value on the next iteration's input, and so on indefinitely. In this situation, no matter how many hidden units the layer contains, it effectively behaves as if it had only one. For this reason, we normally initialize a neural network's model parameters randomly.
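
A toy NumPy demonstration of this symmetry (illustrative only, not from the original post): two hidden units that start with identical weights compute identical outputs, and since their gradients are then identical as well, training can never pull them apart.

import numpy as np

rng = np.random.RandomState(0)
x = rng.rand(4, 3)                               # a minibatch of 4 inputs with 3 features
w_same = np.full((3, 2), 0.5)                    # two hidden units with identical weights
h_same = np.maximum(x @ w_same, 0.0)             # ReLU hidden layer
print(np.allclose(h_same[:, 0], h_same[:, 1]))   # True: both units compute the same value

w_rand = rng.normal(scale=0.1, size=(3, 2))      # random initialization breaks the symmetry
h_rand = np.maximum(x @ w_rand, 0.0)
print(np.allclose(h_rand[:, 0], h_rand[:, 1]))   # False: the units now differ

This is why w1 in the code below is drawn from tf.truncated_normal rather than set to a constant, while the biases can safely start at zero.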

Building the first multilayer perceptron model in TensorFlow:

The input layer takes 784 values (one per pixel of a 28×28 image), the hidden layer has 300 neurons, and the output layer has 10 neurons, one per digit class.

The code is as follows:

import tensorflow as tf
import os
from tensorflow.examples.tutorials.mnist import input_data

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
# Prepare the dataset (MNIST, a handwritten digit database)
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

# Dimensionality of the input data (28x28 = 784 pixels per image)
in_units = 784
# Number of neurons in the hidden layer
h1_units = 300

# Hidden layer parameters (weights randomly initialized to break symmetry)
w1 = tf.Variable(tf.truncated_normal([in_units, h1_units], stddev=0.1))
b1 = tf.Variable(tf.zeros([h1_units]))

# Output layer parameters
w2 = tf.Variable(tf.zeros([h1_units, 10]))
b2 = tf.Variable(tf.zeros([10]))

# Placeholders for a minibatch of flattened input images and their one-hot labels
X = tf.placeholder(tf.float32, [None, in_units])
Y = tf.placeholder(tf.float32, [None, 10])
# Keep probability for dropout
keep_prob = tf.placeholder(tf.float32)

# Hidden layer: fully connected with ReLU activation, then dropout
hidden1 = tf.nn.relu(tf.matmul(X, w1) + b1)
hidden1_drop = tf.nn.dropout(hidden1, keep_prob)
# Output layer: softmax over the dropout-regularized hidden activations
y = tf.nn.softmax(tf.matmul(hidden1_drop, w2) + b2)

# Cross-entropy cost function and Adagrad optimizer
cross_entropy = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(y), reduction_indices=[1]))
optimizer = tf.train.AdagradOptimizer(0.3).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
	sess.run(tf.global_variables_initializer())
	for i in range(3000):
		batch_xs, batch_ys = mnist.train.next_batch(100)
		_, loss = sess.run([optimizer, cross_entropy], feed_dict={X: batch_xs, Y: batch_ys, keep_prob: 0.75})
		if i % 100 == 0:
			test_batch_x, test_batch_y = mnist.train.next_batch(1000)  # note: sampled from the training set, so this accuracy is optimistic; mnist.test would give a true held-out score
			acc = sess.run(accuracy, feed_dict={X: test_batch_x, Y: test_batch_y, keep_prob: 1.0})
			print("iteration:{} loss:{} acc:{} ".format(i, loss, acc))

The output of a run is as follows:

iteration:0 loss:2.3025851249694824 acc:0.2930000126361847 
iteration:100 loss:0.24105916917324066 acc:0.8939999938011169 
iteration:200 loss:0.09633474051952362 acc:0.9490000009536743 
iteration:300 loss:0.2619999051094055 acc:0.9459999799728394 
iteration:400 loss:0.1156739741563797 acc:0.949999988079071 
iteration:500 loss:0.11924201995134354 acc:0.9639999866485596 
iteration:600 loss:0.1646098643541336 acc:0.9739999771118164 
iteration:700 loss:0.1607067734003067 acc:0.9739999771118164 
iteration:800 loss:0.13139671087265015 acc:0.9649999737739563 
iteration:900 loss:0.07568924129009247 acc:0.9779999852180481 
iteration:1000 loss:0.018850363790988922 acc:0.9760000109672546 
iteration:1100 loss:0.07222308963537216 acc:0.9769999980926514 
iteration:1200 loss:0.05168519914150238 acc:0.9819999933242798 
iteration:1300 loss:0.12982039153575897 acc:0.9800000190734863 
iteration:1400 loss:0.12051058560609818 acc:0.984000027179718 
iteration:1500 loss:0.06544838100671768 acc:0.9879999756813049 
iteration:1600 loss:0.01935722306370735 acc:0.9900000095367432 
iteration:1700 loss:0.03924274072051048 acc:0.9890000224113464 
iteration:1800 loss:0.031458571553230286 acc:0.9819999933242798 
iteration:1900 loss:0.0364888459444046 acc:0.9860000014305115 
iteration:2000 loss:0.031533133238554 acc:0.9829999804496765 
iteration:2100 loss:0.02344430424273014 acc:0.9929999709129333 
iteration:2200 loss:0.051968932151794434 acc:0.9869999885559082 
iteration:2300 loss:0.05508466809988022 acc:0.9890000224113464 
iteration:2400 loss:0.03492842614650726 acc:0.9940000176429749 
iteration:2500 loss:0.018993325531482697 acc:0.9940000176429749 
iteration:2600 loss:0.027935774996876717 acc:0.9929999709129333 
iteration:2700 loss:0.0398138128221035 acc:0.9909999966621399 
iteration:2800 loss:0.01869993284344673 acc:0.9940000176429749 
iteration:2900 loss:0.043013833463191986 acc:0.9909999966621399 

Process finished with exit code 0
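
One caveat about the loss used above: applying tf.log to the output of an explicit softmax can produce log(0) = -inf if a predicted probability underflows to zero. TensorFlow's fused cross-entropy op computes the softmax and the log together in a numerically stable way. Below is a sketch of the loss section rewritten around it, reusing the variables from the listing above:

# Produce raw logits; do not apply softmax by hand here.
logits = tf.matmul(hidden1_drop, w2) + b2

# The fused op applies softmax internally and avoids log(0).
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y, logits=logits))

# softmax is monotonic, so argmax over the logits gives the same predictions.
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(Y, 1))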
