TensorFlow Installation Guide

1. TensorFlow Overview

TensorFlow™ is an open-source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the edges represent the multidimensional data arrays (tensors) passed between them. This flexible architecture lets you deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers on the Google Brain team within Google's Machine Intelligence research organization for machine learning and deep neural network research, but the system is general enough to be applicable to a wide variety of other domains as well.
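As a small illustration of the graph model described above, here is a minimal sketch using the TensorFlow 1.x Python API that this guide installs; the constant and add operations are the graph's nodes, and the tensors flowing between them are its edges (this snippet is illustrative, not part of the original installation steps):

import tensorflow as tf

a = tf.constant(3.0)         # node: an op producing a constant tensor
b = tf.constant(4.0)         # node: another constant op
c = tf.add(a, b)             # node: add op; its incoming edges carry the tensors a and b
with tf.Session() as sess:   # a session executes the graph
    print(sess.run(c))       # prints 7.0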

2. Test Environment

This test was performed on CentOS 7:

[root@localhost ~]# uname -a
Linux localhost.localdomain 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

3. YUM Repository Configuration

The main purpose of yum is to make adding, removing, and updating RPM packages more convenient: it resolves package dependencies automatically and makes it easier to manage updates across a large number of systems. The yum repository configuration that ships with CentOS can be used as-is, or you can manually configure the Aliyun or NetEase (163) open-source mirror repositories.

3.1 The EPEL Repository

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including buildsystem, bugzilla instance, updates manager, mirror manager and more.

If you use the yum repositories that ship with CentOS 7, you can install the EPEL repository directly with:

yum install epel-release

It can also be installed as follows:

  • RHEL/CentOS 7:
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
  • On RHEL 7 it is recommended to also enable the optional and extras repositories, since EPEL packages may depend on packages from these repositories:
subscription-manager repos --enable "rhel-*-optional-rpms" --enable "rhel-*-extras-rpms"
  • RHEL/CentOS 6:
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm

4. GPU Support (Optional)

Note: this test was run in a virtual machine with no GPU hardware, so CUDA, cuDNN, and tensorflow-gpu were not tested.

If you want to install TensorFlow with GPU support, you must make sure the correct versions of the CUDA SDK and cuDNN are installed on the system.

4.1 Installing CUDA

CUDA (Compute Unified Device Architecture) is a computing platform from the GPU vendor NVIDIA. CUDA™ is a general-purpose parallel computing architecture that enables GPUs to solve complex computational problems; it comprises the CUDA instruction set architecture (ISA) and the parallel compute engine inside the GPU. Developers can write programs for the CUDA™ architecture in C, the most widely used high-level programming language, and the resulting programs run with very high performance on CUDA™-capable processors. Since CUDA 3.0, C++ and Fortran are also supported.

Install CUDA:

wget http://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/cuda-repo-rhel7-7.0-28.x86_64.rpm
rpm -iv cuda-repo-rhel7-7.0-28.x86_64.rpm
yum search cuda
yum install cuda

You also need to set the LD_LIBRARY_PATH and CUDA_HOME environment variables. Consider adding the commands below to ~/.bash_profile so that they take effect automatically on every login.

Locate the CUDA installation path:

[root@localhost tensorflow]# find / -name cuda
/root/caffe-master/.build_release/cuda
/usr/local/cuda-7.0/targets/x86_64-linux/include/thrust/system/cuda
/usr/local/cuda

In this test the CUDA installation directory is /usr/local/cuda, so the environment variables to add are:

export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64"
export CUDA_HOME=/usr/local/cuda
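As a quick check that the variables are visible in a new login shell, you can inspect them from Python; a small sketch (the expected values assume the /usr/local/cuda path found above):

import os

print(os.environ.get('CUDA_HOME'))                                        # expect /usr/local/cuda
print('/usr/local/cuda/lib64' in os.environ.get('LD_LIBRARY_PATH', ''))   # expect True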

4.2 Downloading and Installing the GPU Driver

Log in to the NVIDIA website, download the latest graphics driver for your operating system, and install it.

4.3 Downloading and Installing cuDNN v3

NVIDIA cuDNN is a GPU-accelerated library for deep neural networks. It emphasizes performance, ease of use, and low memory overhead. cuDNN can be integrated into higher-level machine learning frameworks, such as UC Berkeley's popular Caffe; its simple drop-in design lets developers focus on designing and implementing neural network models rather than on performance tuning, while still achieving high-performance modern parallel computing on the GPU.

tar -xvf cudnn-7-0.tgz
cp cuda/include/cudnn.h /usr/local/cuda/include/
cp cuda/lib64/libcudnn* /usr/local/cuda/lib64/

5. Downloading and Installing TensorFlow

5.1 Requirements

The TensorFlow Python API currently supports Python 2.7 and Python 3.3+.

pip should be version 8.1 or later (or use pip3); otherwise installing tensorflow fails with an error such as:

pip install --upgrade tensorflow
Could not find any downloads that satisfy the requirement tensorflow

The GPU-enabled version (Linux only) requires CUDA Toolkit 7.0 and cuDNN 6.5 v2; see the CUDA installation section for details.

5.2 Installation Overview

TensorFlow can be installed in any of the following ways:

  • Pip install: installs TensorFlow directly on your machine, which may upgrade previously installed Python packages and thereby affect Python programs currently running on the machine.

  • Virtualenv install: installs TensorFlow in an isolated location and does not affect Python programs currently running on your machine.

  • Docker install: installs TensorFlow in a separate Docker container and does not affect any other program on your machine.
  • Anaconda install: Anaconda is a Python scientific-computing environment that bundles many third-party scientific libraries. It uses conda as its package manager and provides its own environments, similar to Virtualenv: as with Virtualenv, conda stores the dependencies required by different Python projects in separate locations. Installing TensorFlow under Anaconda does not overwrite previously installed Python packages.
  • Building and installing from source

5.3 Download and Installation

This document covers only the pip install and the Virtualenv-based install.

5.3.1 Installing TensorFlow with pip

  • Pip install

Check the Python version; the TensorFlow Python API currently supports Python 2.7 and Python 3.3+.

# Check the Python version on the current system
[root@localhost ~]# python -V
Python 2.7.5

pip is a package management system for installing and managing Python packages. If pip is not installed yet, install it first with the commands below (install pip3 if you are using Python 3):

yum install python-pip python-devel   # for Python 2.7
yum install python34-pip python34-devel # for Python 3.4
  • Configuring a domestic (China) PyPI mirror

When installing packages with pip, the default base URL of the Python Package Index is https://pypi.python.org/simple (hosted abroad), so installations may fail for network reasons. Here we switch the PyPI index to the Tsinghua University mirror in China (the 163 or Aliyun mirrors can also be used, but the 163 mirror is not always up to date and may not offer the latest package versions, and the Aliyun mirror occasionally fails to install some packages).

[root@localhost ~]# mkdir ~/.pip/
[root@localhost ~]# vim ~/.pip/pip.conf 
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple/
[install]
trusted-host=pypi.tuna.tsinghua.edu.cn

If you do not configure this file but want to use a domestic PyPI mirror temporarily, use the -i URL option, for example:

pip install --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple/ pip
  • Upgrading pip
pip install --upgrade pip # for Python 2.7
pip3 install --upgrade pip # for Python 3.n

If the upgrade ends with an error (the original post shows the message only as a screenshot), you can first install matplotlib with pip and then retry the pip upgrade:
pip install --upgrade matplotlib
  • Installing TensorFlow
# Linux 64-bit, CPU only, Python 2.7:
pip install --upgrade tensorflow

# Linux 64-bit, GPU enabled, Python 2.7. Requires CUDA toolkit 7.5 and CuDNN v4.
pip install --upgrade tensorflow-gpu

Installing TensorFlow for Python 3:

# Linux 64-bit, CPU only, Python 3.4:
pip3 install --upgrade tensorflow

# Linux 64-bit, GPU enabled, Python 3.4. Requires CUDA toolkit 7.5 and CuDNN v4.
pip3 install --upgrade tensorflow-gpu

At this point you can test whether the installation succeeded.
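For example, a quick sanity check from the Python interpreter (a sketch; the device list will only include a GPU if tensorflow-gpu, CUDA, and cuDNN are correctly installed):

import tensorflow as tf
print(tf.__version__)

# List the devices TensorFlow can see (CPU, plus GPU if available)
from tensorflow.python.client import device_lib
print([d.name for d in device_lib.list_local_devices()])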

5.3.2 Installing TensorFlow with Virtualenv

It is recommended to use virtualenv to create an isolated environment in which to install TensorFlow; this makes installation problems easier to troubleshoot. Virtualenv is a tool that stores and manages the dependency libraries required by different Python projects in separate locations. A Virtualenv-based TensorFlow installation does not overwrite previously installed TensorFlow Python dependencies and does not affect Python programs currently running on your machine.

  • Install pip and Virtualenv
# On Linux, Python 2.7
yum install python-pip python-devel python-virtualenv
pip install --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple/ pip
# Python 3.4
yum install python34-pip python34-devel python-virtualenv
pip3 install --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple/ pip
  • Create a Virtualenv environment
Create a Virtualenv environment under ~/tensorflow:
# Create a Python 2 virtualenv environment
virtualenv --system-site-packages ~/tensorflow
or
virtualenv --system-site-packages --python=/usr/bin/python2.7 ~/tensorflow
# Create a Python 3.4 virtualenv environment
virtualenv --system-site-packages --python=/usr/bin/python3.4 ~/tensorflow
  • Activate the Virtualenv environment and install TensorFlow inside it
Activate the Virtualenv:
source ~/tensorflow/bin/activate  # if using bash
source ~/tensorflow/bin/activate.csh  # if using csh
(tensorflow)$  # the shell prompt should change

Inside the virtualenv, install TensorFlow:

I did not find a way to define a domestic PyPI mirror via a configuration file inside the virtual environment, so here the mirror is specified on the command line when installing and upgrading packages.

# Python 2.7
(tensorflow) # pip install --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple/ pip
# Python 3.4
(tensorflow) # pip3 install --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple/ pip
  • Install TensorFlow
# Linux 64-bit, CPU only, Python 2.7:
(tensorflow)$ pip install --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple/ tensorflow

# Linux 64-bit, GPU enabled, Python 2.7. Requires CUDA toolkit 7.5 and CuDNN v4.
(tensorflow)$ pip install --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple/ tensorflow-gpu
Installing TensorFlow for Python 3:
# Linux 64-bit, CPU only, Python 3.4:
(tensorflow)$ pip3 install --upgrade  -i https://pypi.tuna.tsinghua.edu.cn/simple/ tensorflow

# Linux 64-bit, GPU enabled, Python 3.4. Requires CUDA toolkit 7.5 and CuDNN v4.
(tensorflow)$ pip3 install --upgrade  -i https://pypi.tuna.tsinghua.edu.cn/simple/ tensorflow-gpu

TensorFlow is now installed, and you can run the test programs.

After installation, you must activate the Virtualenv environment each time before using TensorFlow, and deactivate it when you no longer need TensorFlow.

Activate the Virtualenv:
source ~/tensorflow/bin/activate  # if using bash
source ~/tensorflow/bin/activate.csh  # if using csh
(tensorflow)$  # the shell prompt should change

# When you are done with TensorFlow
(tensorflow)$ deactivate  # deactivate the virtualenv
$  # your prompt returns to normal

6. Testing TensorFlow

6.1 Verifying the TensorFlow Installation

[root@localhost ~]# python
Python 2.7.5 (default, Aug  4 2017, 00:39:18) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> sess.run(hello)
'Hello, TensorFlow!'
>>> a = tf.constant(10)
>>> b = tf.constant(32)
>>> sess.run(a + b)
42
>>> sess.close()
>>>
#A pip-installed tensorflow may print the following messages when running commands. They say that compiling TensorFlow with SSE4.1, SSE4.2, and AVX would speed up CPU computations; they do not affect usage. They appear because a pip-installed TensorFlow is a generic build that cannot use these instruction sets to speed up training even though your CPU supports them. To remove the messages you would need to install TensorFlow by compiling it from source.
>>> sess = tf.Session()
2018-04-09 20:34:10.185254: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2018-04-09 20:34:10.185305: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2018-04-09 20:34:10.185320: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
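If you only want to hide these messages rather than rebuild TensorFlow from source, a commonly used workaround (a sketch, not from the original post) is to raise TensorFlow's C++ log level before importing it; setting TF_CPP_MIN_LOG_LEVEL to '2' suppresses INFO and WARNING output:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'   # must be set before importing tensorflow
import tensorflow as tf                    # the SSE/AVX warnings are no longer printed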

6.2 TensorFlow MNIST Test

Note: for details on the TensorFlow MNIST tests, see the TensorFlow Chinese tutorial referenced in Section 7.

TensorFlow is a very powerful library for large-scale numerical computation. One of the tasks it excels at is implementing and training deep neural networks.

When we start learning to program, the first thing we usually do is print "Hello World". Just as programming has Hello World, machine learning has MNIST. MNIST is an entry-level computer vision dataset of handwritten digit images, each with a label telling us which digit it shows.

6.2.1 Locating the TensorFlow installation directory

[root@localhost kk]# find / -name tensorflow
/usr/lib/python2.7/site-packages/tensorflow
/usr/lib/python2.7/site-packages/tensorflow/include/tensorflow

6.2.2 MNIST test 1

We will train a machine learning model to predict the digit shown in each image.

[root@localhost]# cd /usr/lib/python2.7/site-packages/tensorflow/examples/tutorials/mnist/
[root@localhost mnist]# ls
__init__.py  __init__.pyc  input_data.py  input_data.pyc  mnist.py  mnist.pyc
[root@localhost mnist]# python
Python 2.7.5 (default, Aug  4 2017, 00:39:18) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow.examples.tutorials.mnist.input_data as input_data
>>> mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Extracting MNIST_data/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
>>> 

Running the commands above downloads the test data into the MNIST_data directory:
[root@localhost mnist]# ls
__init__.py  __init__.pyc  input_data.py  input_data.pyc  MNIST_data  mnist.py  mnist.pyc
[root@localhost mnist]# cd MNIST_data/
[root@localhost MNIST_data]# ls
t10k-images-idx3-ubyte.gz  t10k-labels-idx1-ubyte.gz  train-images-idx3-ubyte.gz  train-labels-idx1-ubyte.gz

If the input_data import statement reports the following error:

/usr/lib64/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters

The cause is a version conflict between h5py and numpy. The fix has been merged into the h5py master branch upstream but has not been released yet; until a new version is published, you can work around the problem by downgrading numpy:
pip install -U -i  https://pypi.tuna.tsinghua.edu.cn/simple/ numpy==1.13.0
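After the downgrade you can confirm the versions and that importing h5py no longer triggers the FutureWarning (a quick sketch):

import numpy
import h5py               # should no longer print the FutureWarning
print(numpy.__version__)  # expect 1.13.0 after the downgrade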

Once the error is resolved, you can continue with the test.

The complete test steps are as follows:

[root@localhost mnist]# python
Python 2.7.5 (default, Aug  4 2017, 00:39:18) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow.examples.tutorials.mnist.input_data as input_data
>>> mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
>>> import tensorflow as tf
>>> x = tf.placeholder(tf.float32, [None, 784])
>>> W = tf.Variable(tf.zeros([784,10]))
>>> b = tf.Variable(tf.zeros([10]))
>>> y = tf.nn.softmax(tf.matmul(x,W) + b)
>>> y_ = tf.placeholder("float", [None,10])
>>> cross_entropy = -tf.reduce_sum(y_*tf.log(y))
>>> train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
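#cross_entropy above is -sum(y_ * log(y)) over the batch, and GradientDescentOptimizer(0.01) minimizes it by gradient descent with a learning rate of 0.01.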
>>> init = tf.initialize_all_variables()
#A warning indicates that some functions have changed in newer versions:
WARNING:tensorflow:From /usr/lib/python2.7/site-packages/tensorflow/python/util/tf_should_use.py:175: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.

>>> sess = tf.Session()
#To remove these messages, you would need to install TensorFlow by compiling it from source.
2018-04-09 20:38:14.754273: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2018-04-09 20:38:14.754338: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2018-04-09 20:38:14.754347: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
>>> sess.run(init)
>>> for i in range(1000):
...   batch_xs, batch_ys = mnist.train.next_batch(100)
...   sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
... 
>>> correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
>>> accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
>>> print (sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
0.915
>>> sess.close()

The final result is 91.5% accuracy, which is not very good; in fact it is quite poor. That is because we used only a very simple model. With some small improvements, however, we can reach about 97% accuracy, and the best models achieve over 99.7%! See MNIST test 2.

6.2.3 MNIST test 2

In this test we will learn the basic steps of building a TensorFlow model and use them to build a deep convolutional neural network for MNIST.

Note: test 2 may produce the same error as test 1; handle it with the same steps described there.

Test 2 code:

Note: do not drop the leading spaces (indentation) when entering the code, or execution may fail.

import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
import tensorflow as tf
sess = tf.InteractiveSession()
x = tf.placeholder("float", shape=[None, 784])
y_ = tf.placeholder("float", shape=[None, 10])
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
sess.run(tf.initialize_all_variables())
y = tf.nn.softmax(tf.matmul(x,W) + b)
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

for i in range(1000):
  batch = mnist.train.next_batch(50)
  train_step.run(feed_dict={x: batch[0], y_: batch[1]})

correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
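
# The helper functions below build the convolutional network: weight_variable
# initializes weights with small Gaussian noise (stddev 0.1) and bias_variable
# with the constant 0.1 (a slightly positive bias to avoid dead ReLU units);
# conv2d uses stride 1 with SAME padding, and max_pool_2x2 halves each spatial
# dimension.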

def weight_variable(shape):
  initial = tf.truncated_normal(shape, stddev=0.1)
  return tf.Variable(initial)

def bias_variable(shape):
  initial = tf.constant(0.1, shape=shape)
  return tf.Variable(initial)

def conv2d(x, W):
  return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
  return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                        strides=[1, 2, 2, 1], padding='SAME')

W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])

x_image = tf.reshape(x, [-1,28,28,1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)

W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])

h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)

W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])

h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

keep_prob = tf.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])

y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
sess.run(tf.initialize_all_variables())
for i in range(5000):
  batch = mnist.train.next_batch(50)
  if i%100 == 0:
    train_accuracy = accuracy.eval(feed_dict={
        x:batch[0], y_: batch[1], keep_prob: 1.0})
    print("step %d, training accuracy %g"%(i, train_accuracy))
  train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

print("test accuracy %g"%accuracy.eval(feed_dict={
    x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
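One caveat about the listing above (an aside, not part of the original tutorial): the hand-written cross entropy -tf.reduce_sum(y_*tf.log(y_conv)) can produce NaN when y_conv contains zeros. A more numerically stable sketch keeps the raw logits and lets TensorFlow apply the softmax and the log together:

logits = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))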

Complete execution transcript for test 2:

Note: do not drop the leading spaces (indentation) when entering the code, or execution may fail.

cd /usr/lib/python2.7/site-packages/tensorflow/examples/tutorials/mnist/
[root@localhost mnist]# python
Python 2.7.5 (default, Aug  4 2017, 00:39:18) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import input_data
>>> mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
>>> import tensorflow as tf
>>> sess = tf.InteractiveSession()
>>> x = tf.placeholder("float", shape=[None, 784])
>>> y_ = tf.placeholder("float", shape=[None, 10])
>>> W = tf.Variable(tf.zeros([784,10]))
>>> b = tf.Variable(tf.zeros([10]))
>>> sess.run(tf.initialize_all_variables())
WARNING:tensorflow:From /usr/local/python3.6/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py:118: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.
>>> y = tf.nn.softmax(tf.matmul(x,W) + b)
>>> cross_entropy = -tf.reduce_sum(y_*tf.log(y))
>>> train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
>>> 
>>> for i in range(1000):
...   batch = mnist.train.next_batch(50)
...   train_step.run(feed_dict={x: batch[0], y_: batch[1]})
... 
>>> correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
>>> accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
>>> print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
0.912
>>> 
>>> def weight_variable(shape):
...   initial = tf.truncated_normal(shape, stddev=0.1)
...   return tf.Variable(initial)
... 
>>> def bias_variable(shape):
...   initial = tf.constant(0.1, shape=shape)
...   return tf.Variable(initial)
... 
>>> def conv2d(x, W):
...   return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
... 
>>> def max_pool_2x2(x):
...   return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
...                         strides=[1, 2, 2, 1], padding='SAME')
... 
>>> W_conv1 = weight_variable([5, 5, 1, 32])
>>> b_conv1 = bias_variable([32])
>>> 
>>> x_image = tf.reshape(x, [-1,28,28,1])
>>> h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
>>> h_pool1 = max_pool_2x2(h_conv1)
>>> 
>>> W_conv2 = weight_variable([5, 5, 32, 64])
>>> b_conv2 = bias_variable([64])
>>> 
>>> h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
>>> h_pool2 = max_pool_2x2(h_conv2)
>>> 
>>> W_fc1 = weight_variable([7 * 7 * 64, 1024])
>>> b_fc1 = bias_variable([1024])
>>> 
>>> h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
>>> h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
>>> 
>>> keep_prob = tf.placeholder("float")
>>> h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
>>> 
>>> W_fc2 = weight_variable([1024, 10])
>>> b_fc2 = bias_variable([10])
>>> 
>>> y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
>>> 
>>> cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))
>>> train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
>>> correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
>>> accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
>>> sess.run(tf.initialize_all_variables())
#The loop below runs 10000 steps; more steps give higher accuracy, but too many steps may cause the final print statement to fail (due to the virtual machine's limited performance).
>>> for i in range(10000):
...   batch = mnist.train.next_batch(50)
...   if i%100 == 0:
...     train_accuracy = accuracy.eval(feed_dict={
...         x:batch[0], y_: batch[1], keep_prob: 1.0})
...     print("step %d, training accuracy %g"%(i, train_accuracy))
...   train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
...
step 0, training accuracy 0.92
step 100, training accuracy 0.86
step 200, training accuracy 0.94
step 300, training accuracy 0.92
step 400, training accuracy 0.9
step 500, training accuracy 0.94
step 600, training accuracy 0.94
...
...
step 9400, training accuracy 0.92
step 9500, training accuracy 0.9
step 9600, training accuracy 0.98
step 9700, training accuracy 0.92
step 9800, training accuracy 0.98
step 9900, training accuracy 0.96
>>> 
>>> print ("test accuracy %g"%accuracy.eval(feed_dict={
...     x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
test accuracy 0.9195
>>>

7. References

  1. TensorFlow中文教程 (TensorFlow Chinese tutorial)

  2. tensorflow安装测试运行常见问题 (Common problems when installing, testing, and running TensorFlow)


Reposted from blog.51cto.com/zaa47/2121982