Building a project with Bazel: how to bring in external libraries (this project uses OpenCV and a compiled TensorFlow Lite library), how to build shared and static libraries, and how to link against the libraries once built.
The full file structure of the project root is shown in the figure below. The build is described by the files outlined in red; the blue boxes mark the generated static and shared libraries (copied over from other directories).
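Since the figure does not survive here, a rough sketch of the root layout, reconstructed from the targets and paths discussed in this article (the source file names under src are illustrative, not exhaustive):

```text
.
├── WORKSPACE
├── opencv4.BUILD
├── tensorflowLite.BUILD
├── src/
│   ├── BUILD                    # facepose, libfacepose.so, faceposestatic, faceposeHeader
│   ├── IrisLandmark.hpp
│   └── ...                      # remaining *.cpp / *.h* sources
├── demo/
│   ├── BUILD                    # dllfacepose, staticFacepose, showAll, test_opencv
│   ├── showAll.cpp
│   ├── test_opencv.cpp
│   ├── libfacepose.so           # copied from bazel-bin/src/ (a blue box in the figure)
│   └── libfaceposestatic.a      # copied from bazel-bin/src/ (a blue box in the figure)
├── models/
└── face_pose.avi
```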
First, write the WORKSPACE file, which declares the external libraries to pull in:
# Declare the path to the external OpenCV library
new_local_repository(
    name = "opencv4",
    path = "/usr/local",  # directory where OpenCV is installed
    build_file = "opencv4.BUILD",
)
# Declare the path to the external TensorFlow Lite library
new_local_repository(
    name = "tensorflow_lite",
    path = "/home/chentao/tensorflowlite",  # directory containing the compiled TensorFlow Lite build
    build_file = "tensorflowLite.BUILD",
)
The figure below shows the directory structure of my OpenCV and TensorFlow Lite installations; the actual picture makes this easier to understand (there are too many files for tree output to be readable, so please bear with the hand-drawn sketch).
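Since the hand-drawn picture is not reproduced here, a sketch of the layout that the glob patterns in the two BUILD files assume (reconstructed from those files; only the relevant entries are shown):

```text
/usr/local                              # OpenCV install prefix
├── include/opencv4/opencv2/            # matched by hdrs glob include/opencv4/opencv2/**/*.h*
│   ├── highgui.hpp
│   └── ...
└── lib/                                # matched by srcs glob lib/libopencv*.so*
    ├── libopencv_core.so
    ├── libopencv_highgui.so
    └── ...

/home/chentao/tensorflowlite            # TensorFlow Lite build output
├── include/
│   ├── tensorflow/lite/                # matched by hdrs glob include/**/*.h
│   └── flatbuffers/
└── libs/
    └── libtensorflowlite.so            # matched by srcs glob libs/libtensorflowlite.so
```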
Next, write the opencv4.BUILD and tensorflowLite.BUILD files referenced in WORKSPACE. This step is error-prone; to avoid path mistakes, study the directory layout above carefully.
opencv4.BUILD:
# Target that wraps the external OpenCV installation
cc_library(
    name = "opencv4",
    srcs = glob(["lib/libopencv*.so*"]),
    hdrs = glob(["include/opencv4/opencv2/**/*.h*"]),
    includes = ["include/opencv4/"],
    visibility = ["//visibility:public"],
    linkstatic = 0,  # the default is 1, which links in static mode; 0 allows dynamic linking
)
tensorflowLite.BUILD:
# Target that wraps the external TensorFlow Lite library
cc_library(
    name = "tensorflow_lite",
    srcs = glob(["libs/libtensorflowlite.so"]),
    hdrs = glob(["include/**/*.h"]),  # equivalent to glob(["include/tensorflow/lite/**/*.h"]) + glob(["include/flatbuffers/**/*.h"])
    includes = ["include/"],
    visibility = ["//visibility:public"],
    linkstatic = 0,
)
At this point the external libraries are fully declared and can be used directly.
The BUILD file below (in the src directory) consumes the OpenCV and TensorFlow Lite libraries and defines shared-library, static-library, and other targets. It covers the syntax you need most often and is a useful reference; see the comments for details.
# Build the facepose target
cc_library(
    name = "facepose",
    srcs = glob(["*.cpp"]),  # every .cpp file in the current directory
    hdrs = glob(["*.h*"]),   # every header file (.h, .hpp, ...) in the current directory
    deps = [
        "@tensorflow_lite//:tensorflow_lite",
        "@opencv4//:opencv4",
    ],
    visibility = ["//visibility:public"],  # fully public: visible to this workspace and to other workspaces
    #visibility = ["//visibility:private"],  # private: visible only within the same package
    #visibility = ["@my_repo//foo/bar:__pkg__"],  # visible to all targets in package //foo/bar of my_repo
    #visibility = ["//foo/bar:__subpackages__"],  # visible to all targets in //foo/bar and every package beneath it
)
# Build the facepose shared-library target
cc_binary(
    name = "libfacepose.so",  # the name must follow the lib*.so pattern, otherwise gcc may fail to locate the library (seen in references, not verified)
    srcs = glob(["*.cpp"]) + glob(["*.h*"]),
    # hdrs = glob(["*.h*"]),  # cc_binary has no hdrs attribute
    deps = [
        "@tensorflow_lite//:tensorflow_lite",
        "@opencv4//:opencv4",
    ],
    linkshared = True,  # this attribute makes the target produce a shared library
    visibility = ["//visibility:public"],
)
# Build the facepose static-library target
cc_library(
    name = "faceposestatic",  # the output is automatically prefixed with lib and suffixed with .a
    srcs = glob(["*.cpp"]),
    hdrs = glob(["*.h*"]),
    deps = [
        "@tensorflow_lite//:tensorflow_lite",
        "@opencv4//:opencv4",
    ],
    linkstatic = True,  # this attribute makes the target produce a static library
    visibility = ["//visibility:public"],
)
# Build the facepose header-only target
cc_library(
    name = "faceposeHeader",
    hdrs = glob(["*.h*"]),
    visibility = ["//visibility:public"],
)
# Make every header file in the current directory visible to other packages
#exports_files(glob(["*.h*"]))
The BUILD file below (in the demo directory) shows how to build an executable with Bazel and how to consume the shared and static libraries; see the comments for details.
# Import the facepose shared library. To use it in the showAll target, first run
# bazel build //src:libfacepose.so from the project root, then copy the generated
# .so file to the expected location (see where libfacepose.so sits in the tree)
cc_import(
    name = "dllfacepose",
    shared_library = "libfacepose.so",
    deps = ["//src:faceposeHeader"],
)
# Import the facepose static library. To use it in the showAll target, first run
# bazel build //src:faceposestatic from the project root, then copy the generated
# .a file to the expected location (see where libfaceposestatic.a sits in the tree)
cc_import(
    name = "staticFacepose",
    static_library = "libfaceposestatic.a",
    deps = ["//src:faceposeHeader"],
    # if the src BUILD file contains exports_files(glob(["*.h*"])), the attribute
    # below has the same effect as deps
    #hdrs = ["@//src:DetectionPostProcess.hpp", "@//src:FaceDetection.hpp", ...and the remaining headers],
)
# Build the executable target
cc_binary(
    name = "showAll",
    srcs = ["showAll.cpp"],
    deps = [
        "//src:facepose",  # a target built in another package
        #":dllfacepose",  # use the shared library we built ourselves
        #":staticFacepose",  # use the static library we built ourselves; pick exactly one of these three
        "@opencv4//:opencv4",  # external OpenCV library
        "@tensorflow_lite//:tensorflow_lite",
    ],
    linkopts = ["-lpthread"],  # link the pthread library
    copts = ["-Isrc"],  # add src to the include path; without it, showAll.cpp would need #include "src/IrisLandmark.hpp"
    #copts = ["-std=c++14"] + ["-Iinclude"],  # compile as C++14 and search the include directory of the current or dependent packages for headers
)
# Build an executable for testing OpenCV
cc_binary(
    name = "test_opencv",
    srcs = ["test_opencv.cpp"],
    deps = [
        "@opencv4//:opencv4",
    ],
)
All commands below are run from a terminal in the project root, i.e. the directory containing WORKSPACE.
The simplest check is to use test_opencv.cpp to verify that calling OpenCV works:
# build
bazel build //demo:test_opencv
# run
./bazel-bin/demo/test_opencv
The commands for building the shared and static libraries:
# shared library
bazel build //src:libfacepose.so
# static library
bazel build //src:faceposestatic
To build the executable that calls both the OpenCV and TensorFlow Lite libraries:
# build
bazel build //demo:showAll
# run
./bazel-bin/demo/showAll
Finally, here are test_opencv.cpp and showAll.cpp; note the #include details, which have bitten me before.
test_opencv.cpp
#include <iostream>
#include "opencv2/highgui.hpp"

int main(int argc, char* argv[]) {
    cv::VideoCapture cap("../face_pose.avi");
    bool success = cap.isOpened();
    if (success == false)
    {
        std::cerr << "Cannot open the camera." << std::endl;
        return 1;
    }
    while (success)
    {
        cv::Mat rframe, frame;
        success = cap.read(rframe); // read a new frame from video
        if (success == false)
            break;
        cv::imshow("Face detector", rframe);
        if (cv::waitKey(1) == 27)
            break;
    }
    cap.release();
    cv::destroyAllWindows();
    return 0;
}
showAll.cpp
#include <iostream>
#include "opencv2/highgui.hpp"
#include "IrisLandmark.hpp" // note: this must be #include "src/IrisLandmark.hpp" unless the build uses copts = ["-Isrc"]

#define SHOW_FPS (1)

#if SHOW_FPS
#include <chrono>
#endif

int main(int argc, char* argv[]) {
    my::IrisLandmark irisLandmarker("./models");
    cv::VideoCapture cap("./face_pose.avi");
    bool success = cap.isOpened();
    if (success == false)
    {
        std::cerr << "Cannot open the camera." << std::endl;
        return 1;
    }

    #if SHOW_FPS
    float sum = 0;
    int count = 0;
    #endif

    while (success)
    {
        cv::Mat rframe, frame;
        success = cap.read(rframe); // read a new frame from video
        if (success == false)
            break;
        cv::flip(rframe, rframe, 1);

        #if SHOW_FPS
        auto start = std::chrono::high_resolution_clock::now();
        #endif

        irisLandmarker.loadImageToInput(rframe);
        irisLandmarker.runInference();

        for (auto landmark : irisLandmarker.getAllFaceLandmarks()) {
            cv::circle(rframe, landmark, 2, cv::Scalar(0, 255, 0), -1);
        }
        for (auto landmark : irisLandmarker.getAllEyeLandmarks(true, true)) {
            cv::circle(rframe, landmark, 2, cv::Scalar(0, 0, 255), -1);
        }
        for (auto landmark : irisLandmarker.getAllEyeLandmarks(false, true)) {
            cv::circle(rframe, landmark, 2, cv::Scalar(0, 0, 255), -1);
        }

        #if SHOW_FPS
        auto stop = std::chrono::high_resolution_clock::now();
        auto duration = std::chrono::duration_cast<std::chrono::microseconds>(stop - start);
        float inferenceTime = duration.count() / 1e3; // microseconds -> milliseconds
        sum += inferenceTime;
        count += 1;
        int fps = (int) (1e3 / inferenceTime);
        cv::putText(rframe, std::to_string(fps), cv::Point(20, 70), cv::FONT_HERSHEY_PLAIN, 3, cv::Scalar(0, 196, 255), 2);
        #endif

        cv::imshow("Face detector", rframe);
        if (cv::waitKey(10) == 27)
            break;
    }

    #if SHOW_FPS
    std::cout << "Average inference time: " << sum / count << "ms " << std::endl;
    #endif

    cap.release();
    cv::destroyAllWindows();
    return 0;
}