tensorflow build source shell __ for centos

from:

http://www.ehu.eus/ehusfera/hpc/2016/04/07/installing-tensorflow-0-7-in-red-hat-enterprise-linux-server-6-4-with-gpus/comment-page-1/

http://blog.csdn.net/mafeiyu80/article/details/51397795

http://blog.abysm.org/2016/06/building-tensorflow-centos-6/


# centos tensorflow compile
# =========================================================================
#1 install basic tools
yum -y install unzip git pkgconfig zip gcc-c++ zlib-devel

# =========================================================================
#install jdk8
yum remove java-1.6.0-openjdk
yum remove java-1.7.0-openjdk

download "http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html" jdk-8u101-linux-x64.tar.gz
cp jdk-8u101-linux-x64.tar.gz /opt
tar -xvf jdk-8u101-linux-x64.tar.gz
cd jdk1.8.0_101/
alternatives --install /usr/bin/java java /opt/jdk1.8.0_101/bin/java 2
alternatives --config java
alternatives --install /usr/bin/jar jar /opt/jdk1.8.0_101/bin/jar 2
alternatives --install /usr/bin/javac javac /opt/jdk1.8.0_101/bin/javac 2
alternatives --set jar /opt/jdk1.8.0_101/bin/jar
alternatives --set javac /opt/jdk1.8.0_101/bin/javac
export JAVA_HOME=/opt/jdk1.8.0_101
export JRE_HOME=/opt/jdk1.8.0_101/jre
export PATH=$PATH:/opt/jdk1.8.0_101/bin:/opt/jdk1.8.0_101/jre/bin
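
# quick sanity check (my addition, not from the original notes): confirm the alternatives-based JDK 8 setup is active
java -version     # should report 1.8.0_101
javac -version
echo $JAVA_HOME   # should print /opt/jdk1.8.0_101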

# =========================================================================
# Maybe:
# update dev-tools
strings /usr/lib64/libstdc++.so.6 | grep GLIBC
# Install the collection:
sudo yum install devtoolset-3
# Start using software collections:
scl enable devtoolset-3 bash
echo "source /opt/rh/devtoolset-3/enable" >> ~/.bashrc
sudo yum list devtoolset-3\*
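
# quick check (my addition): make sure the devtoolset gcc is the one being picked up
gcc --version    # should report the devtoolset-3 gcc rather than the old system gcc
which gcc        # should point at /opt/rh/devtoolset-3/root/usr/bin/gcc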

# =========================================================================
# install bazel version 0.2.2
wget https://goo.gl/OQ2ZCl -O bazel-installer-linux-x86_64.sh
chmod +x bazel-installer-linux-x86_64.sh
sudo ./bazel-installer-linux-x86_64.sh
rm bazel-installer-linux-x86_64.sh
sudo chown $USER:$USER ~/.cache/bazel/
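
# quick check (my addition): confirm the installer worked
bazel version    # build label should read 0.2.2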

# =========================================================================
# install python deps
??? yum -y install swig
??? yum install python-devel python-nose python-setuptools

??? yum -y install numpy python-devel scipy

??? sudo yum -y install epel-release
??? yum -y install python-pip
??? yum install python-wheel
??? sudo pip install numpy
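
# if unsure which of the "???" commands above are really needed, a quick probe (my addition) shows what is still missing:
python -c "import numpy; print(numpy.__version__)"
python -c "import wheel; print(wheel.__version__)"
swig -version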

# =========================================================================

# !!!IMPORTANT: just use Python 2.7; there is no need to upgrade Python here!!!

#

# update python 3.5.1
wget https://www.python.org/ftp/python/3.5.1/Python-3.5.1.tgz
tar -xvf Python-3.5.1.tgz
cd Python-3.5.1
./configure
make
sudo make install
cd ../
rm Python-3.5.1.tgz
sudo echo "alias python=python3.5" >> ~/.bashrc
source ~/.bashrc
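
# only if you really took this optional route, a quick check (my addition):
python --version      # should now report Python 3.5.1 via the alias
python3.5 --version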

# =========================================================================
echo -e "\e[36m***Cloning TensorFlow from GitHub*** \e[0m"
git clone --recurse-submodules -b r0.8 https://github.com/tensorflow/tensorflow.git
sed -i 's/kDefaultTotalBytesLimit = 64/kDefaultTotalBytesLimit = 128/' tensorflow/google/protobuf/src/google/protobuf/io/coded_stream.h
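
# sanity check (my addition): confirm the sed above actually bumped the limit
grep kDefaultTotalBytesLimit tensorflow/google/protobuf/src/google/protobuf/io/coded_stream.h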

cd tensorflow
./configure

# optimization flags (CPU instruction-set --copt options); a complete build command using them:

bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 -k //tensorflow/tools/pip_package:build_pip_package

bazel --output_base=/search/ted/ypzhang/workspace/bazel/output --output_user_root=/search/ted/ypzhang/workspace/bazel build -c opt --config=cuda //tensorflow/cc:tutorials_example_trainer

bazel --output_base=/search/ted/ypzhang/workspace/bazel/output --output_user_root=/search/ted/ypzhang/workspace/bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

bazel-bin/tensorflow/tools/pip_package/build_pip_package /search/ted/ypzhang/workspace/tf_whl/tensorflow.pkg

pip install /search/ted/ypzhang/workspace/tf_whl/tensorflow.pkg/*
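
# minimal smoke test after installing the wheel (my addition; the constant op is just an arbitrary example)
python -c "import tensorflow as tf; print(tf.Session().run(tf.constant('hello from tf 0.8')))"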


# =========================================================================

# fix protobuf 64M limit

pip uninstall protobuf
cd google/protobuf
./autogen.sh
./configure
make
make check
sudo make install
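
# quick check (my addition): make sure the freshly built protobuf is the one on the PATH
which protoc
protoc --version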

Then update your LD_LIBRARY_PATH as described in google/protobuf/src/README.md, then:

cd python
python setup.py build --cpp_implementation
python setup.py test --cpp_implementation
sudo python setup.py install --cpp_implementation

By default, the package will be installed to /usr/local. However, on many platforms /usr/local/lib is not part of LD_LIBRARY_PATH, so you may need to add it yourself.
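
For example (one way to do it; adjust the prefix if you configured protobuf with a different --prefix):

export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
# or system-wide:
echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/usr-local.conf
sudo ldconfig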

cd ../../..   # back to the TensorFlow source root (assuming you are still in google/protobuf/python)
bazel build -c opt --config=cuda --define=use_fast_cpp_protos=true //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg/
pip install --no-cache-dir --upgrade /tmp/tensorflow_pkg/tensorflow-0.8.0-py2-none-any.whl
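
# quick check (my addition): verify the C++ protobuf implementation is actually in use
python -c "from google.protobuf.internal import api_implementation; print(api_implementation.Type())"   # should print 'cpp'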


///==============================================
1 cannot find gcc: unset CC
  export TF_NEED_OPENCL=0
  export TF_NEED_GCP=0
  export TF_NEED_HDFS=0
  export TF_NEED_CUDA=1
  ./configure
2 bazel --output_base=/search/odin/ypzhang/workspace/bazel/output --output_user_root=/search/odin/ypzhang/workspace/bazel build -c opt --config=cuda //tensorflow/cc:tutorials_example_trainer
  

Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]:
jemalloc is a better memory allocator, so yes.

Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]:
GCP support; ordinary users in China have no need for this.

Do you wish to build TensorFlow with Hadoop File System support? [Y/n]:
HDFS support, which is very useful in production: training data and models can live on HDFS, which avoids shuttling data back and forth.

Do you wish to build TensorFlow with Amazon S3 File System support? [Y/n]:
Amazon S3 storage support; ordinary users in China have no need for this.

Do you wish to build TensorFlow with XLA JIT support? [y/N]:
Accelerated Linear Algebra (it fuses composable ops to improve performance and shrinks executable size through aggressive specialization). This is a runtime optimization; if you understand the JVM's JIT, this one is easy to understand.

Do you wish to build TensorFlow with GDR support?
GPUDirect RDMA support. If you are not building with CUDA, do not enable this; it also requires hardware support. When enabled, parameters can be exchanged over grpc+gdr.

Do you wish to build TensorFlow with VERBS support?
Similar to GDR: parameters are exchanged through the verbs library, i.e. remote direct memory access (RDMA). Enable it if you use InfiniBand cards.

Do you wish to build TensorFlow with MPI support?
MPI support serves the same purpose as GDR and VERBS; if you enable it, you need to specify the installation path. You can use Open MPI (works with ordinary NICs as well as MPI-capable dedicated hardware); for Intel MPI, specify a path such as /opt/intel/impi/2018.1.163/intel64.

Do you wish to build TensorFlow with OpenCL support?
Enabling OpenCL is not recommended. It is an open compute framework, but the de facto standard for heterogeneous computing is CUDA, and if you enable OpenCL you also have to install ComputeCpp for SYCL 1.2.

Do you wish to build TensorFlow with CUDA support?
Enable it if you have a reasonably recent NVIDIA card; this is the key factor in how fast training and inference run.

Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 8.0]:
CUDA 8.0 is recommended.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Optimization flags, i.e. CPU instruction-set optimizations. The default, -march=native, is recommended, especially in production: it optimizes for your CPU's extended instructions (e.g. sse4.1, sse4.2, avx, avx2, fma).

Add "--config=mkl" to your bazel command to build with MKL support.

This tells you that you can also enable MKL. MKL here means Intel® MKL-DNN, which is similar to cuDNN: it unlocks the potential of Intel® Xeon® and Intel® Xeon Phi™ processors, and it also helps on desktop-class i5/i7 CPUs.
This optimization currently only works on Linux and is mutually exclusive with CUDA support. Most users can enable it; AMD CPUs are rarely seen in production anyway.
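
A minimal sketch of answering these questions non-interactively via environment variables (these TF_NEED_* names are what newer ./configure scripts read; check them against the questions your version actually asks, and adjust the answers to your setup):

export TF_NEED_JEMALLOC=1
export TF_NEED_GCP=0
export TF_NEED_HDFS=1
export TF_NEED_S3=0
export TF_ENABLE_XLA=0
export TF_NEED_GDR=0
export TF_NEED_VERBS=0
export TF_NEED_MPI=0
export TF_NEED_OPENCL=0
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=8.0
export CC_OPT_FLAGS="-march=native"
./configure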
# for copt flags

gcc -march=native -Q --help=target

--copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-msse4.2 --copt=-mfpmath=both
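
# to pull out just the interesting flags from that (long) --help=target output, something like this works (my addition):
gcc -march=native -Q --help=target | grep -E 'mavx|mfma|msse4|mfpmath'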

bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 -k //tensorflow/tools/pip_package:build_pip_package

bazel build tensorflow/python/tools:freeze_graph
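
# typical usage of the resulting freeze_graph binary (my addition; all paths and the output node name here are hypothetical)
bazel-bin/tensorflow/python/tools/freeze_graph \
  --input_graph=/tmp/model/graph.pbtxt \
  --input_checkpoint=/tmp/model/model.ckpt \
  --output_node_names=softmax \
  --output_graph=/tmp/model/frozen_graph.pb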


# copt option


Most probably you have not installed TF from source, and instead used something like pip install tensorflow. That means you installed pre-built (by someone else) binaries which were not optimized for your architecture. These warnings tell you exactly this: something is available on your architecture, but it will not be used because the binary was not compiled with it. Here is the relevant part from the documentation:

TensorFlow checks on startup whether it has been compiled with the optimizations available on the CPU. If the optimizations are not included, TensorFlow will emit warnings, e.g. AVX, AVX2, and FMA instructions not included.

The good news is that most probably you just want to learn/experiment with TF, so everything will still work properly and you should not worry about it.


What are SSE4.2 and AVX?

Wikipedia has a good explanation of SSE4.2 and AVX. This knowledge is not required to be good at machine learning. You can think of them as sets of additional instructions that let a computer apply a single instruction to multiple data points at once, which speeds up operations that are naturally parallelizable (for example, adding two arrays).
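
To see which of these instruction sets your own CPU actually reports (my addition, not part of the original answer):

grep -o -w -E 'sse4_2|avx|avx2|fma' /proc/cpuinfo | sort -u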

Both SSE and AVX are implementations of the abstract idea of SIMD (Single Instruction, Multiple Data), which is

a class of parallel computers in Flynn's taxonomy. It describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously. Thus, such machines exploit data-level parallelism, but not concurrency: there are simultaneous (parallel) computations, but only a single process (instruction) at a given moment.

This is enough to answer your next question.


How do these SSE4.2 and AVX improve CPU computations for TF tasks?

They allow a more efficient computation of various vector (matrix/tensor) operations. You can read more in these slides.


How to make TensorFlow compile using the two libraries?

You need a binary that was compiled to take advantage of these instructions. The easiest way is to compile it yourself. As Mike and Yaroslav suggested, you can use the following bazel command:

bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 --config=cuda -k //tensorflow/tools/pip_package:build_pip_package

