TensorFlow Multilayer Perceptron (MLP)

This post walks through building and training a multilayer perceptron (MLP) with TensorFlow; for the other posts in the series, see the index at TensorFlow 学习目录.

Contents

1. The Multilayer Perceptron (MLP)


1. The Multilayer Perceptron (MLP)

A multilayer perceptron differs from a CNN. A CNN extracts local features, and what it trains is not the features themselves but the feature filters (kernels) that produce them. In an MLP, by contrast, every neuron in a hidden layer is a weighted combination of all neurons in the previous layer, so it has no locality and instead attends to the input as a whole. As a result, an MLP has far more parameters than a comparable CNN. For this architecture, training with H-ELM or ML-ELM works considerably better than iterative back-propagation, and it is faster: the parameters only need to be fitted once, whereas BP must be iterated until convergence while watching for overfitting.
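To make the parameter count concrete: the network built below (784 → 520 → 200 → 10) already has over half a million trainable parameters. A quick check, using the layer sizes from the code that follows:

layer_sizes = [784, 520, 200, 10]
# Each fully connected layer has in*out weights plus out biases.
total = sum(n_in * n_out + n_out
            for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
print(total)  # 514410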

import tensorflow as tf
import numpy as np
import get_Dataset

# get_Dataset is the author's helper module (not shown in the post). Judging
# from the feed_dicts below, it returns flattened 784-d MNIST images, one-hot
# training labels, and integer test labels.
x_train, y_train, x_test, y_test = get_Dataset.get_Dataset(name='mnist')

inputs = tf.placeholder(dtype=tf.float32, shape=[None, 784], name='inputs')    # flattened images
labels = tf.placeholder(dtype=tf.float32, shape=[None, 10], name='labels')     # one-hot training labels
pre_labels = tf.placeholder(dtype=tf.int64, shape=[None], name='pre_labels')   # integer labels for evaluation

def fully_connected(x, size, act=tf.nn.relu):
    # A dense layer: act(x @ w + b), with small random-normal weights.
    len_x = int(x.get_shape()[1])
    w = tf.Variable(tf.random_normal(shape=[len_x, size], stddev=0.1), dtype=tf.float32)
    b = tf.Variable(tf.constant(0.0, dtype=tf.float32, shape=[size]))
    z = act(tf.nn.xw_plus_b(x, w, b))
    return z
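# Side note: a small random normal works here, but a common alternative is
# Xavier/Glorot initialization, e.g.
#   w = tf.get_variable('w', shape=[len_x, size],
#                       initializer=tf.glorot_uniform_initializer())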


f1 = fully_connected(inputs, 520)
f2 = fully_connected(f1, 200)
# The output layer is kept linear: softmax_cross_entropy_with_logits expects
# unbounded logits, so no ReLU is applied here.
logits = fully_connected(f2, 10, act=tf.identity)


# Softmax cross-entropy against the one-hot labels, averaged over the batch.
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
optimizer = tf.train.AdamOptimizer(1e-4).minimize(loss)

# Accuracy: fraction of predicted class ids that match the integer labels.
correct_pred = tf.equal(tf.argmax(logits, 1), pre_labels)
acc = tf.reduce_mean(tf.cast(correct_pred, dtype=tf.float32))
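# If the test labels were one-hot like y_train, the pre_labels placeholder
# could be dropped and the same accuracy computed as
#   correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))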


epochs = 50
batch_size = 40
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for epoch in range(epochs):
        # Iterate over the training set in fixed-size mini-batches; any
        # remainder smaller than batch_size is dropped.
        for batch in range(int(x_train.shape[0]/batch_size)):
            batchx = x_train[batch*batch_size: (batch+1)*batch_size]
            batchy = y_train[batch*batch_size: (batch+1)*batch_size]

            feed_dict={
                inputs: batchx,
                labels: batchy
            }
            _ = sess.run(optimizer, feed_dict=feed_dict)

        # Evaluate on the full test set once per epoch (y_test holds
        # integer class ids, matching the pre_labels placeholder).
        Acc = sess.run(acc, feed_dict={inputs: x_test, pre_labels: y_test})
        print ("Epoch ", epoch+1, "ACC = ", Acc)

Output:

Epoch  1 ACC =  0.9402
Epoch  2 ACC =  0.9577
Epoch  3 ACC =  0.9649
Epoch  4 ACC =  0.9682
Epoch  5 ACC =  0.9709
Epoch  6 ACC =  0.972
Epoch  7 ACC =  0.9738
Epoch  8 ACC =  0.9756
Epoch  9 ACC =  0.9757
Epoch  10 ACC =  0.9762
Epoch  11 ACC =  0.9766
Epoch  12 ACC =  0.9759
Epoch  13 ACC =  0.9768
Epoch  14 ACC =  0.9765
Epoch  15 ACC =  0.9772
Epoch  16 ACC =  0.9773
Epoch  17 ACC =  0.9782
Epoch  18 ACC =  0.9773
Epoch  19 ACC =  0.9798
Epoch  20 ACC =  0.9781
Epoch  21 ACC =  0.9795
Epoch  22 ACC =  0.9745
Epoch  23 ACC =  0.9813
Epoch  24 ACC =  0.9806
Epoch  25 ACC =  0.9804
Epoch  26 ACC =  0.9804
Epoch  27 ACC =  0.9786
Epoch  28 ACC =  0.9811
Epoch  29 ACC =  0.9811
Epoch  30 ACC =  0.9808
Epoch  31 ACC =  0.981
Epoch  32 ACC =  0.9809
Epoch  33 ACC =  0.9807
Epoch  34 ACC =  0.9807
Epoch  35 ACC =  0.982
Epoch  36 ACC =  0.9819
Epoch  37 ACC =  0.9816
Epoch  38 ACC =  0.9817
Epoch  39 ACC =  0.9809
Epoch  40 ACC =  0.9749
Epoch  41 ACC =  0.981
Epoch  42 ACC =  0.981
Epoch  43 ACC =  0.9811
Epoch  44 ACC =  0.9815
Epoch  45 ACC =  0.9813
Epoch  46 ACC =  0.9808
Epoch  47 ACC =  0.9811
Epoch  48 ACC =  0.9817
Epoch  49 ACC =  0.9819
Epoch  50 ACC =  0.982
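For reference, the get_Dataset helper is not included in the post. A minimal stand-in (an assumption about its behavior, not the author's actual code) that reproduces the shapes and label formats the script expects might look like this:

# Hypothetical stand-in for the author's get_Dataset module; the real helper
# may differ. It must return flattened float images, one-hot training labels,
# and integer test labels.
import numpy as np
from tensorflow.keras.datasets import mnist

def get_Dataset(name='mnist'):
    assert name == 'mnist'
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    # Flatten 28x28 images to 784-d vectors and scale to [0, 1].
    x_train = x_train.reshape(-1, 784).astype(np.float32) / 255.0
    x_test = x_test.reshape(-1, 784).astype(np.float32) / 255.0
    # One-hot training labels for the `labels` placeholder...
    y_train = np.eye(10, dtype=np.float32)[y_train]
    # ...but integer test labels for the `pre_labels` placeholder.
    return x_train, y_train, x_test, y_test.astype(np.int64)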


Reprinted from blog.csdn.net/Triple_WDF/article/details/103343442