Notes on Building TensorFlow for Android

  • Sharing the full TensorFlow Android build process
    • Reference
    • Build environment
    • Environment setup
      • 1. Install Bazel 0.24.1
      • 2. Install JDK 1.8
      • 3. Install Python 3.6.3
      • 4. Install Android SDK 23
      • 5. Install Android NDK
      • 6. Download the TensorFlow 1.14.0 release
    • Build process
      • 1. Clean the build cache
      • 2. Run configure to set up the TensorFlow build environment
      • 3. Start the build
    • Additional notes

Sharing the full TensorFlow Android build process

I have also written up the TensorFlow iOS build process in a separate post. One more thing: the build pulls down a lot of external dependencies, so make sure your network can actually fetch them, and at a decent download speed!
First, a disclaimer: I'm a deep learning novice and don't really know TensorFlow. I only tried building it because my company needed to run an image recognition model on mobile devices. Fortunately, it worked in the end!

Reference

The Android build guide in the TensorFlow GitHub repository.
Of course, most problems that come up during the build can also be found in the repository's issues.

Build environment

OS: macOS 10.14
Bazel: 0.24.1
TensorFlow: 1.14.0
Android SDK: 23
Android NDK: r17c
Python: 3.6.3
JDK: 1.8

Environment setup

1. Install Bazel 0.24.1

Just follow the macOS installation steps on the official site. Why 0.24.1? Because I tried the other versions and none of them worked for this build.
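
As a rough sketch of what that install looks like (the installer filename here is an assumption based on Bazel's usual release naming; double-check it on the releases page):

# Download and run the Bazel 0.24.1 macOS installer, then verify the version
curl -LO https://github.com/bazelbuild/bazel/releases/download/0.24.1/bazel-0.24.1-installer-darwin-x86_64.sh
chmod +x bazel-0.24.1-installer-darwin-x86_64.sh
./bazel-0.24.1-installer-darwin-x86_64.sh --user   # installs into ~/bin
bazel version                                      # should report 0.24.1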

2. Install JDK 1.8

3. Install Python 3.6.3
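
I won't go over these two installs, but before continuing it's worth a quick check that the versions on your PATH match the ones listed above, for example:

# Quick sanity check of the JDK and Python versions
java -version      # expect 1.8.x
python3 --version  # expect Python 3.6.3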

4. Install Android SDK 23

Install it through Android Studio, or download it from the official site.
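
If you prefer the command line to Android Studio, something like the following should also work; the package names are my guess at matching the API level and build-tools version that ./configure asks about later:

# Install the SDK platform and build-tools used later by ./configure
# (sdkmanager ships with the Android SDK command-line tools)
sdkmanager "platforms;android-23" "build-tools;28.0.3"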

5. Install Android NDK

Download NDK r17c from the official site (or search for it), then unzip it into your SDK installation directory.
Why does it have to be r17c? Because I tried all the other versions, and it was just a waste of time.
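
For reference, this is roughly how r17c can be fetched and unpacked next to the SDK; the download URL is an assumption based on Google's usual NDK file naming, so verify it on the official NDK page:

# Download and unpack NDK r17c into the SDK directory (URL assumed; verify on the official page)
cd ~/Library/Android/sdk
curl -LO https://dl.google.com/android/repository/android-ndk-r17c-darwin-x86_64.zip
unzip android-ndk-r17c-darwin-x86_64.zip   # creates android-ndk-r17c/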

6. Download the TensorFlow 1.14.0 release

I chose 1.14.0 because it is the version I managed to build successfully for both iOS and Android on my Mac.
PS: I tried other versions too, but they kept missing files during the build...
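
One way to pin the exact version is to check out the release tag from GitHub, roughly like this:

# Fetch the TensorFlow source and pin it to the 1.14.0 release tag
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
git checkout v1.14.0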

Build process

1. Clean the build cache

If you have built before, it's best to run this first; if you skip it, there's no telling what problems might come up.

bazel clean --expunge

2. Run configure to set up the TensorFlow build environment

Follow the prompts step by step according to your own machine's setup; below is the configuration I used on my Mac.
In the terminal, run:

./configure

WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.24.1 installed.
Please specify the location of python. [Default is /usr/local/opt/python@2/bin/python2.7]: /Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6

Found possible Python library paths:
  /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
Please input the desired Python library path to use.  Default is [/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages]

Do you wish to build TensorFlow with XLA JIT support? [y/N]: n
No XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with ROCm support? [y/N]: n
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: n
No CUDA support will be enabled for TensorFlow.

Do you wish to download a fresh release of clang? (Experimental) [y/N]: n
Clang will not be downloaded.

Do you wish to build TensorFlow with MPI support? [y/N]: n
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]: 

Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: y
Searching for NDK and SDK installations.

Please specify the home path of the Android NDK to use. [Default is /Users/mask/library/Android/Sdk/ndk-bundle]: /Users/mask/Library/Android/sdk/android-ndk-r17c 

Please specify the (min) Android NDK API level to use. [Available levels: ['14', '15', '16', '17', '18', '19', '21', '22', '23', '24', '26', '27', '28']] [Default is 18]: 21

Please specify the home path of the Android SDK to use. [Default is /Users/mask/library/Android/Sdk]: /Users/mask/Library/Android/sdk 

Please specify the Android SDK API level to use. [Available levels: ['23', '26', '29']] [Default is 29]: 23

Please specify an Android build tools version to use. [Available versions: ['.DS_Store', '28.0.3', '29.0.2']] [Default is 29.0.2]: 28.0.3

Do you wish to build TensorFlow with iOS support? [y/N]: n
No iOS support will be enabled for TensorFlow.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
	--config=mkl         	# Build with MKL support.
	--config=monolithic  	# Config for mostly static monolithic build.
	--config=gdr         	# Build with GDR support.
	--config=verbs       	# Build with libverbs support.
	--config=ngraph      	# Build with Intel nGraph support.
	--config=numa        	# Build with NUMA support.
	--config=dynamic_kernels	# (Experimental) Build kernels into separate shared objects.
Preconfigured Bazel build configs to DISABLE default on features:
	--config=noaws       	# Disable AWS S3 filesystem support.
	--config=nogcp       	# Disable GCP support.
	--config=nohdfs      	# Disable HDFS support.
	--config=noignite    	# Disable Apache Ignite support.
	--config=nokafka     	# Disable Apache Kafka support.
	--config=nonccl      	# Disable NVIDIA NCCL support.
Configuration finished

3. Start the build

In the terminal, run:

bazel build -c opt //tensorflow/contrib/android:libtensorflow_inference.so \
   --verbose_failures \
   --crosstool_top=//external:android/crosstool \
   --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
   --cxxopt=-std=c++11 \
   --cpu=armeabi-v7a

The build starts. Then comes a long wait; the earlier stages usually go fine, but right at the end this error appears:

ERROR: /Users/mask/Desktop/tensorflow-1.14.0Android/tensorflow/contrib/android/BUILD:60:1: Linking of rule '//tensorflow/contrib/android:libtensorflow_inference.so' failed (Exit 1): clang failed: error executing command 
  (cd /private/var/tmp/_bazel_mask/acbc0c8b666a3bbc6e9563ed36a12d36/execroot/org_tensorflow && \
  exec env - \
    ANDROID_BUILD_TOOLS_VERSION=28.0.3 \
    ANDROID_NDK_API_LEVEL=21 \
    ANDROID_NDK_HOME=/Users/mask/Library/Android/sdk/android-ndk-r17c \
    ANDROID_SDK_API_LEVEL=23 \
    ANDROID_SDK_HOME=/Users/mask/Library/Android/sdk \
    PATH=/opt/local/bin:/opt/local/sbin:/Library/Frameworks/Python.framework/Versions/3.6/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/mask/bin:/Users/mask/Library/Android/sdk/android-ndk-r17c/ \
    PWD=/proc/self/cwd \
    PYTHON_BIN_PATH=/Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6 \
    PYTHON_LIB_PATH=/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages \
    TF_CONFIGURE_IOS=0 \
    TF_DOWNLOAD_CLANG=0 \
    TF_NEED_CUDA=0 \
    TF_NEED_OPENCL_SYCL=0 \
    TF_NEED_ROCM=0 \
  external/androidndk/ndk/toolchains/llvm/prebuilt/darwin-x86_64/bin/clang -shared -o bazel-out/armeabi-v7a-opt/bin/tensorflow/contrib/android/libtensorflow_inference.so -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/tensorflow/contrib/android/libandroid_tensorflow_inference_jni.lo -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/tensorflow/java/src/main/native/libnative.lo -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/tensorflow/core/libandroid_tensorflow_lib.lo -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/tensorflow/core/kernels/libandroid_tensorflow_kernels.lo -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/tensorflow/core/libandroid_tensorflow_lib_lite.lo -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/tensorflow/core/libplatform_base.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/hash/libhash.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/hash/libcity.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/types/libbad_variant_access.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/container/libraw_hash_set.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/types/libbad_optional_access.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/container/libhashtablez_sampler.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/synchronization/libsynchronization.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/synchronization/libgraphcycles_internal.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/debugging/libstacktrace.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/debugging/libsymbolize.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/debugging/libdebugging_internal.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/debugging/libdemangle_internal.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/base/libmalloc_internal.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/time/libtime.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/time/internal/cctz/libtime_zone.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/time/internal/cctz/libcivil_time.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/strings/libstrings.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/strings/libinternal.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/base/libthrow_delegate.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/base/libbase.a -Wl,-no-whole-archive -Wl,-whole-archive 
bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/base/libdynamic_annotations.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/base/libspinlock_wait.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/com_google_absl/absl/numeric/libint128.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/tensorflow/core/libstats_calculator_portable.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/double_conversion/libdouble-conversion.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/farmhash_archive/libfarmhash.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/nsync/libnsync_cpp.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/tensorflow/core/libprotos_all_proto_cc_impl.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/tensorflow/core/liberror_codes_proto_cc_impl.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/fft2d/libfft2d.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/protobuf_archive/libprotobuf.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/protobuf_archive/libprotobuf_lite.a -Wl,-no-whole-archive -Wl,-whole-archive bazel-out/armeabi-v7a-opt/bin/external/zlib_archive/libzlib.a -Wl,-no-whole-archive external/androidndk/ndk/sources/cxx-stl/llvm-libc++/libs/armeabi-v7a/libandroid_support.a external/androidndk/ndk/sources/cxx-stl/llvm-libc++/libs/armeabi-v7a/libc++.a external/androidndk/ndk/sources/cxx-stl/llvm-libc++/libs/armeabi-v7a/libc++_static.a external/androidndk/ndk/sources/cxx-stl/llvm-libc++/libs/armeabi-v7a/libc++abi.a external/androidndk/ndk/sources/cxx-stl/llvm-libc++/libs/armeabi-v7a/libunwind.a -landroid -latomic -ldl -llog -lm -z defs -s -Wl,--gc-sections -Wl,--version-script,tensorflow/contrib/android/jni/version_script.lds -ldl -lz -pthread -pthread -pthread -framework Foundation -lm -lm -target armv7-none-linux-androideabi -Wl,--fix-cortex-a8 -gcc-toolchain external/androidndk/ndk/toolchains/arm-linux-androideabi-4.9/prebuilt/darwin-x86_64 -no-canonical-prefixes -Lexternal/androidndk/ndk/sources/cxx-stl/llvm-libc++/libs/armeabi-v7a -static-libgcc '--sysroot=external/androidndk/ndk/platforms/android-21/arch-arm')
Execution platform: @bazel_tools//platforms:host_platform
external/androidndk/ndk/toolchains/arm-linux-androideabi-4.9/prebuilt/darwin-x86_64/lib/gcc/arm-linux-androideabi/4.9.x/../../../../arm-linux-androideabi/bin/ld: error: cannot open Foundation: No such file or directory
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Target //tensorflow/contrib/android:libtensorflow_inference.so failed to build
INFO: Elapsed time: 2413.791s, Critical Path: 324.00s
INFO: 942 processes: 942 local.
FAILED: Build did NOT complete successfully

Solution
Open the file bazel-tensorflow/external/com_google_absl/absl/time/internal/cctz/BUILD.bazel
and delete the following blocks:

config_setting(
    name = "osx",
    constraint_values = [
        "@bazel_tools//platforms:osx",
    ],
)

config_setting(
    name = "ios",
    constraint_values = [
        "@bazel_tools//platforms:ios",
    ],
)

linkopts = select({
    ":osx": [
        "-framework Foundation",
    ],
    ":ios": [
        "-framework Foundation",
    ],
    "//conditions:default": [],
}),

After deleting them, run the build command again:

bazel build -c opt //tensorflow/contrib/android:libtensorflow_inference.so \
   --verbose_failures \
   --crosstool_top=//external:android/crosstool \
   --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
   --cxxopt=-std=c++11 \
   --cpu=armeabi-v7a

This time the .so builds successfully!
Location: bazel-bin/tensorflow/contrib/android/libtensorflow_inference.so
It's best to copy it somewhere else, because building the .jar next will overwrite the output directory.
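
For example, something like this (the destination folder is just an illustration):

# Keep a copy of the .so before the next build step reuses bazel-bin
mkdir -p ~/tf-android-out
cp bazel-bin/tensorflow/contrib/android/libtensorflow_inference.so ~/tf-android-out/
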
Build the .jar
In the terminal, run:

bazel build //tensorflow/contrib/android:android_tensorflow_inference_java

This step is quick and usually trouble-free. The result is:
bazel-bin/tensorflow/contrib/android/libandroid_tensorflow_inference_java.jar
along with libandroid_tensorflow_inference_java-native-header.jar.

Additional notes

Some issues that came up when I handed the libraries over to our Android developers:
1. To support other CPU architectures, change the build command and build each ABI separately: --cpu=armeabi ... (see the sketch after this list).
2. On ABIs that do not support 64-bit arithmetic, such as armeabi-v7a, running a pb model will fail if the model uses 64-bit ops. How do you solve that? Either go back to the model's vendor or to your own developers; take your pick.
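
As a sketch, the 64-bit ARM build is the same command as before with only the --cpu value changed (arm64-v8a being the usual name for that ABI); repeat for each ABI you need:

# Same build command as above, but targeting arm64-v8a
bazel build -c opt //tensorflow/contrib/android:libtensorflow_inference.so \
   --verbose_failures \
   --crosstool_top=//external:android/crosstool \
   --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
   --cxxopt=-std=c++11 \
   --cpu=arm64-v8a
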
Loading a pb model: the "No OpKernel was registered to support Op '...'" problem.
You can use the following command to find the file that implements the op:

grep 'REGISTER.*"..."' tensorflow/core/kernels/*.cc

Once you find it, open the tensorflow/core/kernels/BUILD file
and add the .cc file to android_extended_ops_group1; if there is a corresponding .h file,
add it to android_extended_ops_headers as well. A sketch of the workflow follows.
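
As a sketch with a made-up op name ("MyOp" is hypothetical; use the name from your own error message):

# Find which kernel source file registers the hypothetical op "MyOp"
grep -l 'REGISTER.*"MyOp"' tensorflow/core/kernels/*.cc
# Then add that .cc file to android_extended_ops_group1 in
# tensorflow/core/kernels/BUILD (and its .h, if any, to
# android_extended_ops_headers), and rebuild libtensorflow_inference.so.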
