Ubuntu 18.04 + CUDA 10 + cuDNN + tensorflow-gpu installation guide: setting up a medical neural network workstation

Copyright notice: this is the author's original article; do not repost without permission. https://blog.csdn.net/Arctic_Beacon/article/details/84976493

I just installed Ubuntu 18.04 and have not installed Chinese fonts yet, so the following is written in English.

Step 1

Get CUDA from the official NVIDIA site

CUDA 10

Installation instructions:

sudo dpkg -i cuda-repo-ubuntu1804_10.0.130-1_amd64.deb
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda

Reboot

nvidia-smi   

# you should see a list of gpus printed    
# if not, the previous steps failed.  
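
Optionally, confirm the toolkit version as well. This is a quick sanity check assuming the default CUDA 10 install location /usr/local/cuda (PATH is not set up yet at this point, so the full path is used):

/usr/local/cuda/bin/nvcc --version      # should report release 10.0
cat /usr/local/cuda/version.txt         # should show CUDA Version 10.0.x, if this file is present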

Add environment variables

Go to the HOME folder and press Ctrl+H to show hidden files. Double-click to open ~/.bashrc, then add the following lines to the end of ~/.bashrc:

export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

Reload bashrc, then open /etc/profile:

source ~/.bashrc
sudo gedit /etc/profile

add the line:

export PATH=/usr/local/cuda/bin:$PATH

sudo gedit /etc/ld.so.conf.d/cuda.conf

add the line:

/usr/local/cuda/lib64

source /etc/profile
sudo ldconfig
sudo gedit ~/.bash_profile

add the line:

export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

source ~/.bash_profile
sudo ldconfig
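
To confirm the variables took effect in the current shell, run a quick check (paths assume the default /usr/local/cuda location used above):

echo $PATH | grep cuda
echo $LD_LIBRARY_PATH | grep cuda
which nvcc          # should print /usr/local/cuda/bin/nvcc
nvcc --version      # should report release 10.0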

Verifying

cd /usr/local/cuda/samples
sudo make all -j4
cd /usr/local/cuda/samples/bin/x86_64/linux/release
./deviceQuery
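
deviceQuery should finish with Result = PASS. As an extra sanity check, the bandwidthTest sample built by the same make can be run from the same directory:

./deviceQuery | grep -i result     # expect: Result = PASS
./bandwidthTest                    # should also end with Result = PASS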

Step 2

Get the cuDNN Debian packages from the official site (NVIDIA Developer membership required).

  1. Navigate to your <cudnnpath> directory containing the cuDNN Debian files.
  2. Install the runtime library, for example:
    sudo dpkg -i libcudnn7_7.3.1.20-1+cuda10.0_amd64.deb
  3. Install the developer library, for example:
    sudo dpkg -i libcudnn7-dev_7.3.1.20-1+cuda10.0_amd64.deb
  4. Install the code samples and the cuDNN Library User Guide, for example:
    sudo dpkg -i libcudnn7-doc_7.3.1.20-1+cuda10.0_amd64.deb
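
To confirm the three packages were registered (the exact version strings depend on the files you downloaded):

dpkg -l | grep libcudnn    # expect libcudnn7, libcudnn7-dev and libcudnn7-doc entries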

Verifying

To verify that cuDNN is installed and running properly, compile the mnistCUDNN sample that the libcudnn7-doc package installs under /usr/src/cudnn_samples_v7.

  1. Copy the cuDNN sample to a writable path.
    $ cp -r /usr/src/cudnn_samples_v7/ $HOME
  2. Go to the writable path.
    $ cd $HOME/cudnn_samples_v7/mnistCUDNN
  3. Compile the mnistCUDNN sample.
    $ make clean && make
  4. Run the mnistCUDNN sample.
    $ ./mnistCUDNN
    If cuDNN is properly installed and running on your Linux system, you will see a message similar to the following:
    Test passed!
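
Another quick check is to read the cuDNN version macros from the header; the developer Debian package normally links it at /usr/include/cudnn.h:

cat /usr/include/cudnn.h | grep CUDNN_MAJOR -A 2
# for the packages above, expect CUDNN_MAJOR 7, CUDNN_MINOR 3, CUDNN_PATCHLEVEL 1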

Step 3

Navigate to the directory containing the Anaconda shell installer, then run it:

bash Anaconda3-5.3.0-Linux-x86_64.sh

conda update -n base -c defaults conda
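
To confirm the base installation works, open a new terminal so conda is on the PATH and run the following (version numbers will vary with your installer):

conda --version      # e.g. conda 4.5.x
conda info           # shows the base environment location and channels
python --version     # the Anaconda Python, e.g. Python 3.7.x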

Step 4

Add the conda package mirrors hosted by Tsinghua University and USTC:

conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --set show_channel_urls yes 
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/ 
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/msys2/
conda config --add channels https://mirrors.ustc.edu.cn/anaconda/pkgs/free/
conda config --add channels https://mirrors.ustc.edu.cn/anaconda/pkgs/main/
conda config --set show_channel_urls yes
conda config --add channels https://mirrors.ustc.edu.cn/anaconda/cloud/conda-forge/
conda config --add channels https://mirrors.ustc.edu.cn/anaconda/cloud/msys2/
conda config --add channels https://mirrors.ustc.edu.cn/anaconda/cloud/bioconda/
conda config --add channels https://mirrors.ustc.edu.cn/anaconda/cloud/menpo/
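
The resulting configuration can be reviewed with conda itself, or by opening ~/.condarc where these settings are stored:

conda config --show channels
cat ~/.condarc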

Create an environment and install tensorflow-gpu

conda create -n tensorflow
source activate tensorflow 
conda install tensorflow-gpu
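
Before moving on, a quick command-line check confirms that the GPU build of TensorFlow was installed and can see the GPU. tf.test.is_gpu_available() is part of the TensorFlow 1.x API that conda installs here; run it with the tensorflow environment still activated:

python -c "import tensorflow as tf; print(tf.__version__)"
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"    # expect True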

Step 5

Install spyder

source activate tensorflow
conda install spyder
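
Spyder should then be launched from the activated environment so that it picks up that environment's Python and TensorFlow:

source activate tensorflow
spyder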

Step 6

Testing

import tensorflow as tf

a = tf.constant(1)
b = tf.Variable(2)
c = a + b

init = tf.global_variables_initializer()

# log_device_placement=True prints the device each op is placed on,
# which is what we check below to confirm the GPU is being used
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)) as sess:
    sess.run(init)
    print(sess.run(c))

You should see output similar to the following (the GPU name and PCI bus id will match your hardware); otherwise the installation failed.

Device mapping:
/job:localhost/replica:0/task:0/device:XLA_CPU:0 -> device: XLA_CPU device
/job:localhost/replica:0/task:0/device:XLA_GPU:0 -> device: XLA_GPU device
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1
Variable: (VariableV2): /job:localhost/replica:0/task:0/device:CPU:0
Variable/Assign: (Assign): /job:localhost/replica:0/task:0/device:CPU:0
Variable/read: (Identity): /job:localhost/replica:0/task:0/device:CPU:0
add: (Add): /job:localhost/replica:0/task:0/device:GPU:0
init: (NoOp): /job:localhost/replica:0/task:0/device:GPU:0
Const: (Const): /job:localhost/replica:0/task:0/device:GPU:0
Variable/initial_value: (Const): /job:localhost/replica:0/task:0/device:CPU:0
