From: https://github.com/machinelearningmindset/TensorFlow-Course
TensorFlow Installation from Source

Related links:
- Installing TensorFlow from sources (official guide)
- Installing Bazel
- Installing CUDA
- NVIDIA documentation
- Installing TensorFlow
Installing from source is recommended because it lets the user build a TensorFlow binary tailored to a particular architecture; the resulting TensorFlow has better system compatibility and runs faster. The official TensorFlow documentation explains the process only concisely, however, and once the installation is underway a few details become very important. To avoid confusion, the steps below should be followed in the written order.

Suppose you want to install TensorFlow with GPU support on Ubuntu, using Python 2.7.
Note: see the associated YouTube video for an intuitive explanation.
Preparing the Environment

You should do the following in sequence:

- Install the TensorFlow Python dependencies
- Install Bazel
- Set up the TensorFlow GPU prerequisites

Installing the TensorFlow Python Dependencies
To install the required dependencies, the following commands must be executed in the terminal:
sudo apt-get install python-numpy python-dev python-pip python-wheel python-virtualenv
sudo apt-get install python3-numpy python3-dev python3-pip python3-wheel python3-virtualenv
The second command installs the dependencies for Python 3.
Bazel Installation

See the Bazel installation instructions.
Warning:
Installing Bazel may change the kernels supported by the GPU drivers! Afterwards, you may need to reinstall or update the GPU drivers; otherwise, the following error may occur when testing the TensorFlow installation:
kernel version X does not match DSO version Y -- cannot find working devices in this configuration
To resolve this error you may need to purge all NVIDIA drivers and install or update them again. Please refer to `CUDA Installation`_ for further detail.
Setting Up the TensorFlow GPU Prerequisites

You must meet the following requirements:

- The NVIDIA CUDA Toolkit and its associated drivers (version 8.0 recommended), installed as explained in the CUDA Installation section.
- The cuDNN library (version 5.1 recommended). For more detailed information, see the NVIDIA documentation.
- The libcupti-dev package, installed with the following command:

sudo apt-get install libcupti-dev
Creating a Virtual Environment (optional)

Suppose you want to install TensorFlow in a Python virtual environment. First, we need to create a directory to contain all the environments. This can be accomplished by executing the following command in the terminal:

sudo mkdir ~/virtualenvs

Now, using the virtualenv command, the virtual environment can be created:

sudo virtualenv --system-site-packages ~/virtualenvs/tensorflow
Environment Activation

So far, a virtual environment named *tensorflow* has been created. To activate it, you must run the following:

source ~/virtualenvs/tensorflow/bin/activate

However, this command is too cumbersome to type every time!
Aliases

The solution is to use an alias to make life easier! Let's execute the following command:

echo 'alias tensorflow="source $HOME/virtualenvs/tensorflow/bin/activate"' >> ~/.bash_aliases

After running it, close the terminal and open a new one (or run bash). Now the tensorflow environment can be activated by running this simple command:

tensorflow
Checking ~/.bash_aliases

To double-check, open ~/.bash_aliases from the terminal with the sudo gedit ~/.bash_aliases command. The file should contain the following script:

alias tensorflow="source $HOME/virtualenvs/tensorflow/bin/activate"
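The alias line above can also be sanity-checked programmatically. The following is a minimal, self-contained sketch that writes the same alias line to a temporary file and confirms it is present; a temp file stands in for the real ~/.bash_aliases so nothing on the system is touched:

```shell
#!/bin/sh
# Sketch: append the activation alias to a file and verify it landed there.
# A temporary file stands in for the real ~/.bash_aliases.
rc=$(mktemp)
echo 'alias tensorflow="source $HOME/virtualenvs/tensorflow/bin/activate"' >> "$rc"
grep -c '^alias tensorflow=' "$rc"   # prints 1 if the alias line was written
rm -f "$rc"
```

Running the same grep against the real ~/.bash_aliases is a quick alternative to opening the file in an editor.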
Checking .bashrc

Also, let us use the sudo gedit ~/.bashrc command to check the .bashrc shell script. It should contain the following, which sources the aliases file:
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi
Configuring the Installation

First, you must clone the TensorFlow repository:

git clone https://github.com/tensorflow/tensorflow

Once the environment is ready, you must configure the installation. The configuration flags are important because they determine the performance and compatibility of the TensorFlow installation! First, go to the TensorFlow root directory:

cd tensorflow # cd to the cloned directory

Then run the configuration script:

$ ./configure
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python2.7
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Do you wish to use jemalloc as the malloc implementation? [Y/n] Y
jemalloc enabled
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] N
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N] N
No Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N] N
No XLA JIT support will be enabled for TensorFlow
Found possible Python library paths:
/usr/local/lib/python2.7/dist-packages
/usr/lib/python2.7/dist-packages
Please input the desired Python library path to use. Default is [/usr/local/lib/python2.7/dist-packages]
Using python library path: /usr/local/lib/python2.7/dist-packages
Do you wish to build TensorFlow with OpenCL support? [y/N] N
No OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] Y
CUDA support will be enabled for TensorFlow
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Please specify the Cuda SDK version you want to use, e.g. 7.0. [Leave empty to use system default]: 8.0
Please specify the location where CUDA 8.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify the cuDNN version you want to use. [Leave empty to use system default]: 5.1.10
Please specify the location where cuDNN 5 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size.
[Default is: "3.5,5.2"]: "5.2"
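The interactive prompts above can also be answered ahead of time through environment variables that the configure script reads. The sketch below mirrors the transcript; the exact variable names accepted by configure have changed across TensorFlow versions, so treat this set as an assumption to verify against your checkout:

```shell
#!/bin/sh
# Sketch: pre-answering ./configure via environment variables, mirroring the
# interactive transcript above (Python 2.7, CUDA 8.0, cuDNN 5, capability 5.2).
export PYTHON_BIN_PATH=/usr/bin/python2.7
export TF_NEED_JEMALLOC=1
export TF_NEED_GCP=0
export TF_NEED_HDFS=0
export TF_ENABLE_XLA=0
export TF_NEED_OPENCL=0
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=8.0
export TF_CUDNN_VERSION=5
export TF_CUDA_COMPUTE_CAPABILITIES=5.2
./configure
```

This is convenient when the build has to be repeated, since the answers live in a script instead of being retyped at each prompt.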
Note:

- You must check the relevant files under /usr/local/cuda to accurately determine the cuDNN version.
- The compute capability depends on the GPU model available in your system. For example, a GeForce GTX Titan X GPU has compute capability 5.2.
- If you need to reconfigure, running bazel clean first is recommended.
Caveat:

- If you need to install TensorFlow in a virtual environment, you must first activate the environment and then run the ./configure script.
Testing Bazel (optional)

We can run Bazel tests to make sure everything works:
./configure
bazel test ...
Building the .whl Package

After configuration, you must build the pip package with Bazel. To build a TensorFlow package with GPU support, execute the following command:

bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

The bazel build command creates an executable script named build_pip_package. Run it as follows to build the .whl file in the ~/tensorflow_package directory:
bazel-bin/tensorflow/tools/pip_package/build_pip_package ~/tensorflow_package
Installing the Pip Package

There are two kinds of installation: a system-wide installation using root, and an installation inside a virtual environment.

System installation

The following command installs the pip package created by bazel build:
sudo pip install ~/tensorflow_package/file_name.whl
Using a virtual environment

First, you must activate the environment. Since we defined the alias tensorflow for the environment, simply running the tensorflow command in the terminal activates it. Then, as in the first part, run the following:
pip install ~/tensorflow_package/file_name.whl
Warning:

- When installing inside a virtual environment, the sudo command should not be used: with sudo, pip points to the native system packages instead of the packages available in the virtual environment.
- Because sudo mkdir ~/virtualenvs was used to create the virtual environment directory, pip install may return a permission error. In that case, you must use the sudo chmod -R 777 ~/virtualenvs command to change the permissions of the environment's root directory.
Verifying the Installation

In the terminal (in the home directory), the following script must run correctly; no errors and no warnings is best:

python
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> print(sess.run(hello))
Common Errors

Different errors have been reported while building and running TensorFlow:

- Mismatch between supported kernel versions: This error was mentioned earlier in this document. The straightforward solution is to reinstall the CUDA drivers.
- ImportError: cannot import name pywrap_tensorflow: This error usually occurs when the TensorFlow Python library is loaded from the wrong directory, i.e., not from the user-installed version. The first step is to make sure we are in the home directory (not in the cloned source tree), so that the proper Python libraries are used. Basically, we can open a new terminal and test the TensorFlow installation again.
- ImportError: No module named packaging.version: This is most likely related to the pip installation. Reinstalling pip with python -m pip install -U pip or sudo python -m pip install -U pip can solve it!
Summary

In this tutorial, we described how to install TensorFlow from source, which allows better compatibility with the system configuration. We also studied installation inside a Python virtual environment, which keeps the TensorFlow environment separate from other environments. Conda environments and Python virtual environments will be explained in a separate post. In any case, TensorFlow built from source can run faster than TensorFlow installed from pre-built binaries, although it increases the complexity of the installation process.