I just installed Ubuntu 18.04 and have not installed the Chinese fonts yet, so the following is written in English.
Get CUDA 10 from the official site (download the repository .deb for Ubuntu 18.04).
Installation instructions:
sudo dpkg -i cuda-repo-ubuntu1804_10.0.130-1_amd64.deb
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda
Reboot
nvidia-smi
# you should see a list of gpus printed
# if not, the previous steps failed.
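If nvidia-smi cannot find the driver, one quick extra check (not part of the original steps, just a sketch) is to confirm the NVIDIA kernel module was loaded after the reboot:
lsmod | grep nvidia
# the proprietary driver typically shows up as nvidia, nvidia_modeset, nvidia_drm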
Add environment variables
Go to your HOME folder, press Ctrl+H to show hidden files, and open ~/.bashrc (double-click it or use any editor). Add the following lines to the end of ~/.bashrc:
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
Reload .bashrc:
source ~/.bashrc
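As a quick sanity check (not in the original steps), confirm the CUDA compiler is now found on the PATH:
which nvcc
# should point into /usr/local/cuda/bin
nvcc --version
# should report release 10.0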
sudo gedit /etc/profile
add the line:
export PATH=/usr/local/cuda/bin:$PATH
sudo gedit /etc/ld.so.conf.d/cuda.conf
add the line:
/usr/local/cuda/lib64
source /etc/profile
sudo ldconfig
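Optionally, confirm that the dynamic linker cache now contains the CUDA runtime library (a quick check assuming the default install path):
ldconfig -p | grep libcudart
# libcudart.so.10.0 should appear in the output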
sudo gedit ~/.bash_profile
add the line:
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
source ~/.bash_profile
sudo ldconfig
Verify the CUDA installation by building and running the samples:
cd /usr/local/cuda/samples
sudo make all -j4
cd /usr/local/cuda/samples/bin/x86_64/linux/release
./deviceQuery
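deviceQuery should end with Result = PASS. As a second optional check, the bandwidth test in the same directory should also pass:
./bandwidthTest
# should also finish with Result = PASS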
Get the cuDNN Debian packages from the official site (NVIDIA Developer membership required), then install them:
sudo dpkg -i libcudnn7_7.3.1.20-1+cuda10.0_amd64.deb
sudo dpkg -i libcudnn7-dev_7.3.1.20-1+cuda10.0_amd64.deb
sudo dpkg -i libcudnn7-doc_7.3.1.20-1+cuda10.0_amd64.deb
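Before building the sample, you can confirm the three packages registered correctly (a quick optional check):
dpkg -l | grep libcudnn
# should list libcudnn7, libcudnn7-dev and libcudnn7-doc, version 7.3.1.20-1+cuda10.0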
To verify that cuDNN is installed and running properly, compile the mnistCUDNN sample located in /usr/src/cudnn_samples_v7 (installed by the libcudnn7-doc package):
cp -r /usr/src/cudnn_samples_v7/ $HOME
cd $HOME/cudnn_samples_v7/mnistCUDNN
make clean && make
./mnistCUDNN
If cuDNN is properly installed and running on your Linux system, you will see a message similar to the following: Test passed!
Navigate to the directory containing the Anaconda shell installer and run it:
bash Anaconda3-5.3.0-Linux-x86_64.sh
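The installer offers to add Anaconda to ~/.bashrc; accept that (or add it yourself), then reload ~/.bashrc or open a new terminal so the conda command is found (a small extra step, not in the original notes):
source ~/.bashrc
conda --version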
conda update -n base -c defaults conda
Add the conda package mirrors hosted by Tsinghua and USTC:
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --set show_channel_urls yes
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/msys2/
conda config --add channels https://mirrors.ustc.edu.cn/anaconda/pkgs/free/
conda config --add channels https://mirrors.ustc.edu.cn/anaconda/pkgs/main/
conda config --add channels https://mirrors.ustc.edu.cn/anaconda/cloud/conda-forge/
conda config --add channels https://mirrors.ustc.edu.cn/anaconda/cloud/msys2/
conda config --add channels https://mirrors.ustc.edu.cn/anaconda/cloud/bioconda/
conda config --add channels https://mirrors.ustc.edu.cn/anaconda/cloud/menpo/
If a channel does not work, list the configured channels and remove the broken one, for example:
conda config --show channels
conda config --remove channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/
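The channel configuration is stored in ~/.condarc (the default per-user config file), so you can also inspect or edit it there directly:
cat ~/.condarc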
Create an environment and install TensorFlow
conda create -n tensorflow
source activate tensorflow
conda install tensorflow-gpu
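At this point you can already check, from the activated environment, that the GPU build works (a one-line sketch assuming conda installs TensorFlow 1.x, which matches the session-based test below):
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
# should print True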
Install spyder
source activate tensorflow
conda install spyder
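Launch Spyder from the activated environment so it uses that environment's Python interpreter and therefore the GPU build of TensorFlow (the conda package provides the spyder launcher):
source activate tensorflow
spyder
Then run the following test script in Spyder (or any Python console):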
import tensorflow as tf

a = tf.constant(1)
b = tf.Variable(2)
c = a + b

init = tf.global_variables_initializer()
# log_device_placement=True prints the device mapping shown below
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
                                      log_device_placement=True)) as sess:
    sess.run(init)
    print(sess.run(c))  # should print 3
You should get output similar to the following (from the device placement log); otherwise something went wrong:
Device mapping:
/job:localhost/replica:0/task:0/device:XLA_CPU:0 -> device: XLA_CPU device
/job:localhost/replica:0/task:0/device:XLA_GPU:0 -> device: XLA_GPU device
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1
Variable: (VariableV2): /job:localhost/replica:0/task:0/device:CPU:0
Variable/Assign: (Assign): /job:localhost/replica:0/task:0/device:CPU:0
Variable/read: (Identity): /job:localhost/replica:0/task:0/device:CPU:0
add: (Add): /job:localhost/replica:0/task:0/device:GPU:0
init: (NoOp): /job:localhost/replica:0/task:0/device:GPU:0
Const: (Const): /job:localhost/replica:0/task:0/device:GPU:0
Variable/initial_value: (Const): /job:localhost/replica:0/task:0/device:CPU:0