IoT Device AI Setup 3: A Walkthrough of the TF Lite Build Process

Series contents:

IoT Device AI Setup 1: Building TensorFlow Lite on Linux
IoT Device AI Setup 2: Deploying TF Lite on the Raspberry Pi (an Image Classification Example)
IoT Device AI Setup 3: A Walkthrough of the TF Lite Build Process
IoT Device AI Setup 4: Makefile Basic Syntax

In the previous article we covered compiling and cross-compiling TF Lite on Linux. In this article we take a detailed look at the build process itself.

build_rpi_lib

Take building the static library for the Raspberry Pi as an example:

./tensorflow/lite/tools/make/build_rpi_lib.sh
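The script also honors a TARGET_ARCH environment variable (visible in the script body below), so a build for the Pi Zero/One could, for instance, be requested roughly like this (illustrative invocation; armv6 support depends on your TF Lite version):

TARGET_ARCH=armv6 ./tensorflow/lite/tools/make/build_rpi_lib.sh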

The contents of build_rpi_lib.sh are:

set -x
set -e

# Get the directory of this script, i.e. tensorflow/lite/tools/make
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TENSORFLOW_DIR="${SCRIPT_DIR}/../../../.."

FREE_MEM="$(free -m | awk '/^Mem/ {print $2}')"
# Use "-j 4" only memory is larger than 2GB
if [[ "FREE_MEM" -gt "2000" ]]; then
  NO_JOB=4
else
  NO_JOB=1
fi

if [[ ! -z "${TARGET_ARCH}" ]]; then
  make -j ${NO_JOB} TARGET=rpi -C "${TENSORFLOW_DIR}" -f tensorflow/lite/tools/make/Makefile TARGET_ARCH=${TARGET_ARCH}
else
  make -j ${NO_JOB} TARGET=rpi -C "${TENSORFLOW_DIR}" -f tensorflow/lite/tools/make/Makefile TARGET_ARCH=armv7l
fi

The main logic of the script is:

  1. Set the number of parallel build jobs based on the available memory;
  2. Invoke tensorflow/lite/tools/make/Makefile with the TARGET specified by this script; inside the Makefile, different TARGET values select different sets of build parameters.
    The files that set these values live here:
(base) jiadongfeng@jiadongfeng:~/tensorflow/lite/tools/make/targets$ ls
aarch64_makefile.inc  ios_makefile.inc    riscv_makefile.inc  stm32f1_makefile.inc
bbb_makefile.inc      linux_makefile.inc  rpi_makefile.inc    stm32f7_makefile.inc
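For comparison, 64-bit ARM boards go through the aarch64 target; the same directory ships a companion build script for that case (shown here for reference; availability depends on your TF Lite version):

./tensorflow/lite/tools/make/build_aarch64_lib.sh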

For example, the Raspberry Pi configuration looks like this (only the TARGET_ARCH=armv7l case is shown):

# Settings for Raspberry Pi.
ifeq ($(TARGET),rpi)
  # Default to the architecture used on the Pi Two/Three (ArmV7), but override this
  # with TARGET_ARCH=armv6 to build for the Pi Zero or One.
  TARGET_ARCH := armv7l

  ifeq ($(TARGET_ARCH), armv7l)
    TARGET_TOOLCHAIN_PREFIX := arm-linux-gnueabihf-
    CXXFLAGS += \
      -march=armv7-a \
      -mfpu=neon-vfpv4 \
      -funsafe-math-optimizations \
      -ftree-vectorize \
      -fPIC

    CFLAGS += \
      -march=armv7-a \
      -mfpu=neon-vfpv4 \
      -funsafe-math-optimizations \
      -ftree-vectorize \
      -fPIC

    LDFLAGS := \
      -Wl,--no-export-dynamic \
      -Wl,--exclude-libs,ALL \
      -Wl,--gc-sections \
      -Wl,--as-needed

    BUILD_WITH_RUY := true
  endif

 ...
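TARGET_TOOLCHAIN_PREFIX assumes the arm-linux-gnueabihf cross toolchain is on your PATH. On a Debian/Ubuntu host it is typically installed with something like the following (a sketch; package names vary by distribution and are not part of the TF Lite scripts):

sudo apt-get install g++-arm-linux-gnueabihf
arm-linux-gnueabihf-g++ --version   # verify the cross compiler is reachable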

tensorflow/lite/tools/make/Makefile loads these configuration files as follows:


# These target-specific makefiles should modify or replace options like
# CXXFLAGS or LIBS to work for a specific targetted architecture. All logic
# based on platforms or architectures should happen within these files, to
# keep this main makefile focused on the sources and dependencies.
# Load the target-specific parameter files
include $(wildcard $(MAKEFILE_DIR)/targets/*_makefile.inc)

Makefile

The file is fairly long, so let's go through it section by section.

1. Determine the target platform and its CPU architecture. They can be specified explicitly; for example, build_rpi_lib.sh passes TARGET=rpi and TARGET_ARCH=armv7l.
If they are not specified, TARGET and TARGET_ARCH are derived from the host system information:

# Try to figure out the host system
HOST_OS :=
ifeq ($(OS),Windows_NT)
    HOST_OS = windows
else
    UNAME_S := $(shell uname -s)
    ifeq ($(UNAME_S),Linux)
        HOST_OS := linux
    endif
    ifeq ($(UNAME_S),Darwin)
        HOST_OS := osx
    endif
endif

HOST_ARCH := $(shell if uname -m | grep -q i[345678]86; then echo x86_32; else uname -m; fi)

# Override these on the make command line to target a specific architecture. For example:
# make -f tensorflow/lite/tools/make/Makefile TARGET=rpi TARGET_ARCH=armv7l
TARGET := $(HOST_OS)
TARGET_ARCH := $(HOST_ARCH)

TARGET: the platform to build for, mainly windows, linux, or osx.
TARGET_ARCH: the CPU architecture of that platform.
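Because TARGET and TARGET_ARCH default to the host values, running the Makefile with no overrides produces a native build; cross-compiling simply overrides them on the command line, exactly as the comment above (and build_rpi_lib.sh) shows:

# Native build for the current host
make -j 4 -f tensorflow/lite/tools/make/Makefile

# Cross-compile for the Raspberry Pi (what build_rpi_lib.sh effectively runs)
make -j 4 TARGET=rpi TARGET_ARCH=armv7l -f tensorflow/lite/tools/make/Makefile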

2. Set the include paths for the build dependencies


INCLUDES := \
-I. \
-I$(MAKEFILE_DIR)/../../../../../ \
-I$(MAKEFILE_DIR)/../../../../../../ \
-I$(MAKEFILE_DIR)/downloads/ \
-I$(MAKEFILE_DIR)/downloads/eigen \
-I$(MAKEFILE_DIR)/downloads/absl \
-I$(MAKEFILE_DIR)/downloads/gemmlowp \
-I$(MAKEFILE_DIR)/downloads/neon_2_sse \
-I$(MAKEFILE_DIR)/downloads/farmhash/src \
-I$(MAKEFILE_DIR)/downloads/flatbuffers/include \
-I$(MAKEFILE_DIR)/downloads/fp16/include \
-I$(OBJDIR)
# This is at the end so any globally-installed frameworks like protobuf don't
# override local versions in the source tree.
INCLUDES += -I/usr/local/include
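Most of these -I paths point into tensorflow/lite/tools/make/downloads/, which is empty in a fresh checkout; it is populated by the download script. If the headers are missing, run it from the repository root first:

./tensorflow/lite/tools/make/download_dependencies.sh
ls tensorflow/lite/tools/make/downloads/   # should now contain absl, eigen, flatbuffers, gemmlowp, ...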


3. Select build options according to the target CPU architecture

# These are the default libraries needed, but they can be added to or
# overridden by the platform-specific settings in target makefiles.
LIBS := \
-lstdc++ \
-lpthread \
-lm \
-lz

# There are no rules for compiling objects for the host system (since we don't
# generate things like the protobuf compiler that require that), so all of
# these settings are for the target compiler.
CFLAGS := -O3 -DNDEBUG -fPIC
CXXFLAGS := $(CFLAGS) --std=c++11 $(EXTRA_CXXFLAGS)
LDOPTS := -L/usr/local/lib
ARFLAGS := -r
TARGET_TOOLCHAIN_PREFIX :=
CC_PREFIX :=

ifeq ($(HOST_OS),windows)
CXXFLAGS += -fext-numeric-literals -D__LITTLE_ENDIAN__
endif

# Auto-detect optimization opportunity if building natively.
ifeq ($(HOST_OS),$(TARGET))
ifeq ($(HOST_ARCH),$(TARGET_ARCH))
ifeq ($(TARGET_ARCH),armv7l)
ifneq ($(shell cat /proc/cpuinfo | grep Features | grep neon),)
  ifneq ($(shell cat /proc/cpuinfo | grep Features | grep vfpv4),)
    CXXFLAGS += -mfpu=neon-vfpv4
  else
    CXXFLAGS += -mfpu=neon
  endif
endif # ifeq ($(TARGET_ARCH),armv7l)
endif # ifeq ($(HOST_ARCH),$(TARGET_ARCH))
endif # ifeq ($(HOST_OS),$(TARGET))
endif

  • CFLAGS: options for the C compiler.
    1. -O1 enables basic optimization; -O2 enables more aggressive optimization at the cost of longer compile times; -O3 enables the highest optimization level.
    2. -DNDEBUG defines the NDEBUG macro, which disables assertions.
    3. -fPIC generates position-independent code.

  • CXXFLAGS: options for the C++ compiler.
    1. --std=c++11 enables the C++11 standard.
    Together, CFLAGS and CXXFLAGS cover both the compile and assemble steps.

  • LDFLAGS: options passed through to the linker by gcc and similar drivers; library search paths can also be given here, e.g. LDFLAGS=-L/usr/lib -L/path/to/your/lib. Almost every package installs a lib directory under its prefix, so if one package is installed but another package's build cannot find it, try adding that package's lib path to LDFLAGS.

  • CFLAGS can also carry header (.h) search paths, e.g. CFLAGS=-I/usr/include -I/path/include. Likewise, each installed package has an include directory under its prefix; if a build cannot find headers, try adding the include directories of previously installed packages to this variable.

  • LIBS: tells the linker which libraries to link, e.g. LIBS = -lpthread -liconv
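Note that CXXFLAGS also picks up $(EXTRA_CXXFLAGS), so extra C++ options can be injected from the make command line without editing any file; for example (illustrative, -DSOME_MACRO is a placeholder):

make -j 4 TARGET=rpi TARGET_ARCH=armv7l EXTRA_CXXFLAGS="-DSOME_MACRO" -f tensorflow/lite/tools/make/Makefile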

4. Define the names of the final static library and the benchmark binaries

# This library is the main target for this makefile. It will contain a tf_common
# runtime that can be linked in to other programs.
LIB_NAME := libtensorflow-lite.a

# Benchmark static library and binary
BENCHMARK_LIB_NAME := benchmark-lib.a
BENCHMARK_BINARY_NAME := benchmark_model
BENCHMARK_PERF_OPTIONS_BINARY_NAME := benchmark_model_performance_options

The benchmark_model tool in TensorFlow helps estimate the floating-point operations (FLOPs) a model requires; you can then use this information to decide whether it is feasible to run the model on your target device.
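After the build, the benchmark binary can be copied to the device and run roughly like this (a sketch: the model file is a placeholder and the exact flag set depends on your TF Lite version):

./benchmark_model --graph=mobilenet_v1_1.0_224.tflite --num_threads=4 --enable_op_profiling=true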

5. Gather the TF Lite and benchmark source files that the build depends on

# Override these on the make command line to target a program
# make -f tensorflow/lite/tools/make/Makefile TARGET_PROGRAM_SRCS=tensorflow/lite/examples/label_image/label_image.cc
# A small example program that shows how to link against the library.
# The example program's .cc files to compile
TARGET_PROGRAM_SRCS := \
tensorflow/lite/examples/label_image/label_image.cc \
tensorflow/lite/examples/label_image/bitmap_helpers.cc


# What sources we want to compile, must be kept in sync with the main Bazel
# build files.
# Memory-usage and timing sources, used for profiling/debugging
PROFILER_SRCS := \
  tensorflow/lite/profiling/memory_info.cc \
    tensorflow/lite/profiling/time.cc

PROFILE_SUMMARIZER_SRCS := \
    tensorflow/lite/profiling/profile_summarizer.cc \
    tensorflow/lite/profiling/profile_summary_formatter.cc \
    tensorflow/core/util/stats_calculator.cc

# Command-line flag parsing sources
CMD_LINE_TOOLS_SRCS := \
    tensorflow/lite/tools/command_line_flags.cc

# Core TF Lite library sources the build depends on
CORE_CC_ALL_SRCS := \
$(wildcard tensorflow/lite/*.cc) \
$(wildcard tensorflow/lite/*.c) \
$(wildcard tensorflow/lite/c/*.c) \
$(wildcard tensorflow/lite/core/*.cc) \
$(wildcard tensorflow/lite/core/api/*.cc) \
$(wildcard tensorflow/lite/experimental/resource/*.cc) \
$(wildcard tensorflow/lite/experimental/ruy/*.cc)

# TF Lite kernel sources the build depends on
# $(error $(CORE_CC_ALL_SRCS))  # debug line: if uncommented, prints CORE_CC_ALL_SRCS and aborts make
ifneq ($(BUILD_TYPE),micro)
CORE_CC_ALL_SRCS += \
$(wildcard tensorflow/lite/kernels/*.cc) \
$(wildcard tensorflow/lite/kernels/internal/*.cc) \
$(wildcard tensorflow/lite/kernels/internal/optimized/*.cc) \
$(wildcard tensorflow/lite/kernels/internal/reference/*.cc) \
$(wildcard tensorflow/lite/tools/optimize/sparsity/*.cc) \
$(PROFILER_SRCS) \
tensorflow/lite/tools/make/downloads/farmhash/src/farmhash.cc \
tensorflow/lite/tools/make/downloads/fft2d/fftsg.c \
tensorflow/lite/tools/make/downloads/fft2d/fftsg2d.c \
tensorflow/lite/tools/make/downloads/flatbuffers/src/util.cpp
# Google's Abseil common libraries
CORE_CC_ALL_SRCS += \
    $(shell find tensorflow/lite/tools/make/downloads/absl/absl/ \
                 -type f -name \*.cc | grep -v test | grep -v benchmark | grep -v synchronization | grep -v debugging | grep -v hash | grep -v flags)
endif
# Remove any duplicates.
CORE_CC_ALL_SRCS := $(sort $(CORE_CC_ALL_SRCS))
# Remove test code and tool code that the library does not need
CORE_CC_EXCLUDE_SRCS := \
$(wildcard tensorflow/lite/*test.cc) \
$(wildcard tensorflow/lite/*/*test.cc) \
$(wildcard tensorflow/lite/*/*/benchmark.cc) \
$(wildcard tensorflow/lite/*/*/example*.cc) \
$(wildcard tensorflow/lite/*/*/test*.cc) \
$(wildcard tensorflow/lite/*/*/*test.cc) \
$(wildcard tensorflow/lite/*/*/*tool.cc) \
$(wildcard tensorflow/lite/*/*/*/*test.cc) \
$(wildcard tensorflow/lite/kernels/*test_main.cc) \
$(wildcard tensorflow/lite/kernels/*test_util*.cc) \
tensorflow/lite/experimental/ruy/tune_tool.cc \
$(TARGET_PROGRAM_SRCS)

# Decide whether to use memory-mapped model loading
BUILD_WITH_MMAP ?= true
ifeq ($(BUILD_TYPE),micro)
    BUILD_WITH_MMAP=false
endif
ifeq ($(BUILD_TYPE),windows)
    BUILD_WITH_MMAP=false
endif
ifeq ($(BUILD_WITH_MMAP),true)
    CORE_CC_EXCLUDE_SRCS += tensorflow/lite/mmap_allocation.cc
else
    CORE_CC_EXCLUDE_SRCS += tensorflow/lite/mmap_allocation_disabled.cc
endif

BUILD_WITH_RUY ?= false
ifeq ($(TARGET_ARCH),aarch64)
    BUILD_WITH_RUY=true
endif
ifeq ($(BUILD_WITH_RUY),true)
  CXXFLAGS += -DTFLITE_WITH_RUY
endif

# If building with NNAPI, add the corresponding delegate sources
BUILD_WITH_NNAPI ?= false
ifeq ($(BUILD_WITH_NNAPI),true)
    CORE_CC_ALL_SRCS += tensorflow/lite/delegates/nnapi/nnapi_delegate.cc
    CORE_CC_ALL_SRCS += tensorflow/lite/delegates/nnapi/quant_lstm_sup.cc
    CORE_CC_ALL_SRCS += tensorflow/lite/nnapi/nnapi_implementation.cc
    CORE_CC_ALL_SRCS += tensorflow/lite/nnapi/nnapi_util.cc
    LIBS += -lrt
else
    CORE_CC_ALL_SRCS += tensorflow/lite/delegates/nnapi/nnapi_delegate_disabled.cc
    CORE_CC_ALL_SRCS += tensorflow/lite/nnapi/nnapi_implementation_disabled.cc
endif

# Exclude the logging implementations that do not match the target platform
ifeq ($(TARGET),ios)
    CORE_CC_EXCLUDE_SRCS += tensorflow/lite/minimal_logging_android.cc
    CORE_CC_EXCLUDE_SRCS += tensorflow/lite/minimal_logging_default.cc
else
    CORE_CC_EXCLUDE_SRCS += tensorflow/lite/minimal_logging_android.cc
    CORE_CC_EXCLUDE_SRCS += tensorflow/lite/minimal_logging_ios.cc
endif

# Filter out all the excluded files.
# This yields the final list of TF Lite sources
TF_LITE_CC_SRCS := $(filter-out $(CORE_CC_EXCLUDE_SRCS), $(CORE_CC_ALL_SRCS))

# Benchmark tool sources
BENCHMARK_SRCS_DIR := tensorflow/lite/tools/benchmark
EVALUATION_UTILS_SRCS := \
  tensorflow/lite/tools/evaluation/utils.cc
BENCHMARK_ALL_SRCS := \
    $(wildcard $(BENCHMARK_SRCS_DIR)/*.cc) \
    $(PROFILE_SUMMARIZER_SRCS) \
    $(CMD_LINE_TOOLS_SRCS) \
    $(EVALUATION_UTILS_SRCS)

BENCHMARK_MAIN_SRC := $(BENCHMARK_SRCS_DIR)/benchmark_main.cc
BENCHMARK_PERF_OPTIONS_SRC := \
    $(BENCHMARK_SRCS_DIR)/benchmark_tflite_performance_options_main.cc

# The final set of benchmark library sources
BENCHMARK_LIB_SRCS := $(filter-out \
    $(wildcard $(BENCHMARK_SRCS_DIR)/*_test.cc) \
    $(BENCHMARK_MAIN_SRC) \
    $(BENCHMARK_PERF_OPTIONS_SRC) \
    $(BENCHMARK_SRCS_DIR)/benchmark_plus_flex_main.cc, \
    $(BENCHMARK_ALL_SRCS))
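Since BUILD_WITH_MMAP, BUILD_WITH_RUY, and BUILD_WITH_NNAPI are assigned with ?=, they can be overridden from the environment or the make command line; for instance, forcing the ruy back end on looks like this (rpi_makefile.inc already does this for armv7l):

make -j 4 TARGET=rpi TARGET_ARCH=armv7l BUILD_WITH_RUY=true -f tensorflow/lite/tools/make/Makefile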

6. Set target-platform-specific build parameters

The parameter files live in the directory shown below. Based on the value of TARGET, they set TARGET_TOOLCHAIN_PREFIX, CXXFLAGS, CFLAGS, LDFLAGS, LIBS, and so on. As the file names show, TF Lite can be built for aarch64, iOS, RISC-V, STM32F1/STM32F7, generic Linux, the Raspberry Pi, and the BeagleBone Black:

(base) jiadongfeng@jiadongfeng:~/tensorflow/lite/tools/make/targets$ ls
aarch64_makefile.inc  ios_makefile.inc    riscv_makefile.inc  stm32f1_makefile.inc
bbb_makefile.inc      linux_makefile.inc  rpi_makefile.inc    stm32f7_makefile.inc


# These target-specific makefiles should modify or replace options like
# CXXFLAGS or LIBS to work for a specific targetted architecture. All logic
# based on platforms or architectures should happen within these files, to
# keep this main makefile focused on the sources and dependencies.
# Load the target-specific parameter files
include $(wildcard $(MAKEFILE_DIR)/targets/*_makefile.inc)

7. Set the final output directories and the cross-compilation tools CXX, CC, and AR

# All source files the build depends on
ALL_SRCS := \
    $(TARGET_PROGRAM_SRCS) \
    $(PROFILER_SRCS) \
    $(PROFILE_SUMMARIZER_SRCS) \
    $(TF_LITE_CC_SRCS) \
    $(BENCHMARK_LIB_SRCS) \
    $(CMD_LINE_TOOLS_SRCS)

# Where compiled objects are stored.
TARGET_OUT_DIR ?= $(TARGET)_$(TARGET_ARCH)
GENDIR := $(MAKEFILE_DIR)/gen/$(TARGET_OUT_DIR)/ #tensorflow/lite/tools/make/gen/rpi_armv7l
OBJDIR := $(GENDIR)obj/
BINDIR := $(GENDIR)bin/
LIBDIR := $(GENDIR)lib/

# LIBDIR is where the final libraries are written: tensorflow/lite/tools/make/gen/rpi_armv7l/lib

# LIB_NAME is libtensorflow-lite.a, so LIB_PATH is .../lib/libtensorflow-lite.a
LIB_PATH := $(LIBDIR)$(LIB_NAME)
BENCHMARK_LIB := $(LIBDIR)$(BENCHMARK_LIB_NAME)

# The benchmark binary path: tensorflow/lite/tools/make/gen/rpi_armv7l/bin/benchmark_model
BENCHMARK_BINARY := $(BINDIR)$(BENCHMARK_BINARY_NAME)
BENCHMARK_PERF_OPTIONS_BINARY := $(BINDIR)$(BENCHMARK_PERF_OPTIONS_BINARY_NAME)

TF_COMMON_BINARY := $(BINDIR)tf_common

CXX := $(CC_PREFIX)${TARGET_TOOLCHAIN_PREFIX}g++
CC := $(CC_PREFIX)${TARGET_TOOLCHAIN_PREFIX}gcc
AR := $(CC_PREFIX)${TARGET_TOOLCHAIN_PREFIX}ar
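After a successful TARGET=rpi TARGET_ARCH=armv7l build, the output tree should look roughly like this (a sketch of the expected contents):

ls tensorflow/lite/tools/make/gen/rpi_armv7l/lib
# libtensorflow-lite.a  benchmark-lib.a
ls tensorflow/lite/tools/make/gen/rpi_armv7l/bin
# benchmark_model  benchmark_model_performance_options  tf_common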

8. Turn the collected .c and .cc files into the corresponding lists of .o files. The generated .o files end up under the directory OBJDIR points to, e.g. tensorflow/lite/tools/make/gen/rpi_armv7l/obj for the Raspberry Pi build (or gen/linux_x86_64/obj for a native Linux build).

# Map the example program's .cc/.c files to .o files and prepend the output directory; the results are
# tensorflow/lite/tools/make/gen/rpi_armv7l/obj/tensorflow/lite/examples/label_image/label_image.o
# tensorflow/lite/tools/make/gen/rpi_armv7l/obj/tensorflow/lite/examples/label_image/bitmap_helpers.o

TF_COMMON_OBJS := $(addprefix $(OBJDIR), \
$(patsubst %.cc,%.o,$(patsubst %.c,%.o,$(TARGET_PROGRAM_SRCS))))

# The .o files for all TF Lite sources go under
# tensorflow/lite/tools/make/gen/rpi_armv7l/obj

LIB_OBJS := $(addprefix $(OBJDIR), \
$(patsubst %.cc,%.o,$(patsubst %.c,%.o,$(patsubst %.cpp,%.o,$(TF_LITE_CC_SRCS)))))

# The benchmark main object: tensorflow/lite/tools/make/gen/rpi_armv7l/obj/tensorflow/lite/tools/benchmark/benchmark_main.o
BENCHMARK_MAIN_OBJ := $(addprefix $(OBJDIR), \
$(patsubst %.cc,%.o,$(patsubst %.c,%.o,$(BENCHMARK_MAIN_SRC))))

# Generates tensorflow/lite/tools/make/gen/rpi_armv7l/obj/tensorflow/lite/tools/benchmark/benchmark_tflite_performance_options_main.o
BENCHMARK_PERF_OPTIONS_OBJ := $(addprefix $(OBJDIR), \
$(patsubst %.cc,%.o,$(patsubst %.c,%.o,$(BENCHMARK_PERF_OPTIONS_SRC))))

# The .o files for the benchmark library sources
BENCHMARK_LIB_OBJS := $(addprefix $(OBJDIR), \
$(patsubst %.cc,%.o,$(patsubst %.c,%.o,$(BENCHMARK_LIB_SRCS))))
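After a build you can confirm this mapping by listing the corresponding obj/ subdirectory (paths as in the comments above):

ls tensorflow/lite/tools/make/gen/rpi_armv7l/obj/tensorflow/lite/examples/label_image/
# bitmap_helpers.o  label_image.o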

9. Compile every .c/.cc/.cpp file into its object file. The pattern rules below use make's automatic variables:

  • $@: the target of the rule
  • $^: all prerequisites
  • $<: the first prerequisite

# For normal manually-created TensorFlow Lite C++ source files.
$(OBJDIR)%.o: %.cc
    @mkdir -p $(dir $@)
    $(CXX) $(CXXFLAGS) $(INCLUDES) -c $< -o $@

# For normal manually-created TensorFlow Lite C source files.
$(OBJDIR)%.o: %.c
    @mkdir -p $(dir $@)
    $(CC) $(CFLAGS) $(INCLUDES) -c $< -o $@
$(OBJDIR)%.o: %.cpp
    @mkdir -p $(dir $@)
    $(CXX) $(CXXFLAGS) $(INCLUDES) -c $< -o $@
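Putting the rules and the automatic variables together: when the .cc rule fires for label_image.cc in the rpi build, make expands it into a command roughly like the following (a sketch; the exact flag list depends on your configuration):

arm-linux-gnueabihf-g++ -O3 -DNDEBUG -fPIC --std=c++11 -march=armv7-a -mfpu=neon-vfpv4 \
  -I. -Itensorflow/lite/tools/make/downloads/flatbuffers/include [more -I paths] \
  -c tensorflow/lite/examples/label_image/label_image.cc \
  -o tensorflow/lite/tools/make/gen/rpi_armv7l/obj/tensorflow/lite/examples/label_image/label_image.o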

10. Build the static libraries from the object files, and build the executables from the static libraries


# The target that's compiled if there's no command-line arguments.
all: $(LIB_PATH)  $(TF_COMMON_BINARY) $(BENCHMARK_BINARY) $(BENCHMARK_PERF_OPTIONS_BINARY)

# The target that's compiled for micro-controllers
micro: $(LIB_PATH)

# Hack for generating schema file bypassing flatbuffer parsing
tensorflow/lite/schema/schema_generated.h:
    @cp -u tensorflow/lite/schema/schema_generated.h.OPENSOURCE tensorflow/lite/schema/schema_generated.h

# Gathers together all the objects we've compiled into a single '.a' archive.
# Use ar to bundle all of the object files TF Lite depends on into a single .a static library;
# programs can later link against this library to produce an executable. The general form is:
#   ar rcs libtest.a test.o   (the archive conventionally starts with "lib" so the linker can find it via -ltest)
$(LIB_PATH): tensorflow/lite/schema/schema_generated.h $(LIB_OBJS)
    @mkdir -p $(dir $@)
    $(AR) $(ARFLAGS) $(LIB_PATH) $(LIB_OBJS)

lib: $(LIB_PATH)

# Build the executable from the static library
$(TF_COMMON_BINARY): $(TF_COMMON_OBJS) $(LIB_PATH)
    @mkdir -p $(dir $@)
    $(CXX) $(CXXFLAGS) $(INCLUDES) \
    -o $(TF_COMMON_BINARY) $(TF_COMMON_OBJS) \
    $(LIBFLAGS) $(LIB_PATH) $(LDFLAGS) $(LIBS)

tf_common: $(TF_COMMON_BINARY)

$(BENCHMARK_LIB) : $(LIB_PATH) $(BENCHMARK_LIB_OBJS)
    @mkdir -p $(dir $@)
    $(AR) $(ARFLAGS) $(BENCHMARK_LIB) $(LIB_OBJS) $(BENCHMARK_LIB_OBJS)

benchmark_lib: $(BENCHMARK_LIB)

$(BENCHMARK_BINARY) : $(BENCHMARK_MAIN_OBJ) $(BENCHMARK_LIB)
    @mkdir -p $(dir $@)
    $(CXX) $(CXXFLAGS) $(INCLUDES) \
    -o $(BENCHMARK_BINARY) $(BENCHMARK_MAIN_OBJ) \
    $(LIBFLAGS) $(BENCHMARK_LIB) $(LDFLAGS) $(LIBS)

$(BENCHMARK_PERF_OPTIONS_BINARY) : $(BENCHMARK_PERF_OPTIONS_OBJ) $(BENCHMARK_LIB)
    @mkdir -p $(dir $@)
    $(CXX) $(CXXFLAGS) $(INCLUDES) \
    -o $(BENCHMARK_PERF_OPTIONS_BINARY) $(BENCHMARK_PERF_OPTIONS_OBJ) \
    $(LIBFLAGS) $(BENCHMARK_LIB) $(LDFLAGS) $(LIBS)

benchmark: $(BENCHMARK_BINARY) $(BENCHMARK_PERF_OPTIONS_BINARY)

libdir:
    @echo $(LIBDIR)

# Define the make clean target
# Gets rid of all generated files.
clean:
    rm -rf $(MAKEFILE_DIR)/gen

# Gets rid of target files only, leaving the host alone. Also leaves the lib
# directory untouched deliberately, so we can persist multiple architectures
# across builds for iOS and Android.
cleantarget:
    rm -rf $(OBJDIR)
    rm -rf $(BINDIR)
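Finally, the generated libtensorflow-lite.a can be linked into your own programs. A cross-link for the Raspberry Pi, run from the repository root, might look roughly like this (a sketch: my_app.cc is a placeholder source file, and the exact include/library set depends on your build):

arm-linux-gnueabihf-g++ --std=c++11 -O3 my_app.cc \
  -I. \
  -Itensorflow/lite/tools/make/downloads/flatbuffers/include \
  tensorflow/lite/tools/make/gen/rpi_armv7l/lib/libtensorflow-lite.a \
  -lpthread -lm -lz -ldl \
  -o my_app
# -ldl may or may not be required depending on the TF Lite version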
