In the previous article I covered building the tflite model and importing it on the Python side; now it's time to import it on the ARM board. To avoid regenerating the input data, I save the data that Python generates on the PC to a txt file, so I modified last time's code slightly:
import numpy as np
import time
import math
import tensorflow as tf
SIZE = 1000
X = np.random.rand(SIZE, 1)
X = X*(math.pi/2.0)
np.savetxt("/home/alcht0/share/project/tensorflow-v1.12.0/tmp.txt", X)
start = time.time()
x1 = tf.placeholder(tf.float32, [SIZE, 1], name='x1-input')
x2 = tf.placeholder(tf.float32, [SIZE, 1], name='x2-input')
y1 = tf.sin(x1)
y2 = tf.sin(x2)
y = y1*y2
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    converter = tf.lite.TFLiteConverter.from_session(sess, [x1, x2], [y])
    tflite_model = converter.convert()
    open("/home/alcht0/share/project/tensorflow-v1.12.0/converted_model.tflite", "wb").write(tflite_model)
end = time.time()
print("2nd ", str(end - start))
The next step is to rewrite the import code in C++. (There really isn't much material online about Tensorflow Lite on ARM... low-level work is a thankless grind.) Tensorflow Lite does ship with examples, but the label_image example reads images, which doesn't match my case, and the mnist example leaves the input-feeding part blank for you to fill in yourself, which is why I got stuck on specifying the inputs for quite a while. Code first:
#include "tensorflow/contrib/lite/model.h"
#include "tensorflow/contrib/lite/mutable_op_resolver.h"
#include "tensorflow/contrib/lite/kernels/register.h"
#include <cstdio>
#include <cstdlib>
#include <sys/time.h>  // NOLINT(build/include_order)
#include <iostream>
#include <memory>
#include <string>
#define LOG(x) std::cerr
using namespace tflite;
float* arr;  // input buffer, filled by generateinputfromfile()
double get_us(struct timeval t) { return (t.tv_sec * 1000000 + t.tv_usec); }
// Read one float per line from the text file written by np.savetxt().
void loadtxt(const char* txt, float* arr)
{
    int r;
    int n;
    FILE* fpRead = fopen(txt, "r");
    if (fpRead == NULL)
    {
        printf("%s File Open Failed\n", txt);
        exit(-1);
    }
    n = 0;
    while (1)
    {
        r = fscanf(fpRead, "%f\n", &arr[n]);
        if (r != 1)
        {
            break;
        }
        n++;
    }
    printf("Read %d data from input file\n", n);
    fclose(fpRead);
}

void generateinputfromfile(int count)
{
    arr = (float*)malloc(count * sizeof(float));
    loadtxt("tmp.txt", arr);
}
int main(int argc, char** argv) {
    struct timeval start_time, stop_time;
    const char* filename = argv[1];
    int num_threads = 1;
    std::string input_layer_type = "float";
    int base_index = 0;
    int count = atoi(argv[2]);
    if (argc == 4)
    {
        num_threads = atoi(argv[3]);
    }
    printf("model size is %d\n", count);
    generateinputfromfile(count);
    gettimeofday(&start_time, nullptr);
    printf("Loading Model File ....\n");
    // Map the .tflite flatbuffer into memory.
    std::unique_ptr<tflite::FlatBufferModel> model;
    model = tflite::FlatBufferModel::BuildFromFile(filename);
    if (!model)
    {
        LOG(FATAL) << "\nFailed to mmap model " << filename << "\n";
        exit(-1);
    }
    printf("Model Loading Complete\n");
    // Build the interpreter from the model and the builtin op resolver.
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::ops::builtin::BuiltinOpResolver resolver;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    if (!interpreter)
    {
        LOG(FATAL) << "Failed to construct interpreter\n";
        exit(-1);
    }
    printf("Interpreter Construct Complete\n");
    if (num_threads != 1)
    {
        interpreter->SetNumThreads(num_threads);
    }
    if (interpreter->AllocateTensors() != kTfLiteOk)
    {
        printf("Failed to allocate tensors\n");
        exit(0);
    }
    // Feed the same data into both input tensors, element by element.
    for (int i = 0; i < count; i++)
    {
        interpreter->typed_input_tensor<float>(0)[i] = arr[i];
        interpreter->typed_input_tensor<float>(1)[i] = arr[i];
    }
    if (interpreter->Invoke() != kTfLiteOk)
    {
        std::printf("Failed to invoke!\n");
        exit(0);
    }
    float* output;
    output = interpreter->typed_output_tensor<float>(0);
    gettimeofday(&stop_time, nullptr);
    printf("Tensorflow Complete time: %f ms\n",
           (get_us(stop_time) - get_us(start_time)) / 1000);
    free(arr);
    return 0;
}
First, read in the tflite model file:
std::unique_ptr<tflite::FlatBufferModel> model;
model = tflite::FlatBufferModel::BuildFromFile(filename);
Then, since on ARM the results come from building the computation graph directly from the tflite file, there is no need to create a session; we construct an interpreter directly:
std::unique_ptr<tflite::Interpreter> interpreter;
This reportedly sets up the operators:
tflite::ops::builtin::BuiltinOpResolver resolver;
and the interpreter is built from those operators:
tflite::InterpreterBuilder(*model, resolver)(&interpreter);
Then allocate memory for the tensors:
if(interpreter->AllocateTensors() != kTfLiteOk)
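Once AllocateTensors() succeeds, you can inspect the input tensors to confirm their shapes match what the converter produced. This check is not in my program above; it's a minimal sketch against the interpreter's public API, assuming the two float inputs defined in the Python script:

// Optional sanity check (not in the original program): print each input
// tensor's name and shape after AllocateTensors().
for (int idx : interpreter->inputs()) {
    TfLiteTensor* t = interpreter->tensor(idx);
    printf("input %d (%s): %d dims, dim0=%d\n",
           idx, t->name, t->dims->size, t->dims->data[0]);
}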
Next comes the input setup that I fumbled with for so long; it's actually very simple:
for (int i = 0; i < count; i++)
{
    interpreter->typed_input_tensor<float>(0)[i] = arr[i];
    interpreter->typed_input_tensor<float>(1)[i] = arr[i];
}
The main thing was that I hadn't figured out how this is meant to be used. Also note that you can't just point interpreter->typed_input_tensor at arr's address directly; that throws an error, a type error if I remember right. I don't know whether memcpy would work; I'll try it later.
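In principle it should: typed_input_tensor<float>(0) returns a plain float* into the tensor's own buffer, so copying into that buffer ought to be equivalent to the element-wise loop above. A sketch of what I mean, untested on the board (needs <cstring>):

// Untested sketch: bulk-copy into the buffers that typed_input_tensor
// returns instead of assigning element by element.
float* in0 = interpreter->typed_input_tensor<float>(0);
float* in1 = interpreter->typed_input_tensor<float>(1);
memcpy(in0, arr, count * sizeof(float));
memcpy(in1, arr, count * sizeof(float));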
After that it's just invoking the graph and reading the output; nothing special there.
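To convince yourself the graph actually computed the right thing, a quick check (my addition, not in the program above) is to compare against the C library, since the model is just y = sin(x1)*sin(x2) and both inputs were fed the same data (needs <cmath>):

// Quick correctness check (my addition): every output element should
// equal sinf(arr[i]) squared, since x1 and x2 hold the same data.
float* y = interpreter->typed_output_tensor<float>(0);
for (int i = 0; i < 5; i++) {
    printf("y[%d] = %f (expected %f)\n", i, y[i], sinf(arr[i]) * sinf(arr[i]));
}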
It runs on ARM now, but the speed really doesn't impress. In Python on the PC, Tensorflow ran slower than computing directly at 1000 operations, but at 100000 it was already much faster. On ARM, however, Tensorflow Lite is far, far slower than computing directly, and the for loop that feeds the inputs isn't the problem: invoke() alone eats a lot of time. I don't know whether Tensorflow Lite's backend is poorly optimized for this instruction set, or whether its graph-execution advantage is simply limited here; after all, the C library's trigonometric functions have been in use forever, and their optimization is surely as good as it gets.
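For reference, the "computing directly" baseline in the log below is just a plain loop over the same data, timed the same way as the TFLite path; it looks roughly like this (out is assumed to be a float buffer of size count):

// Rough sketch of the direct-computation baseline: same input buffer,
// plain C library calls, timed with the same gettimeofday() helpers.
gettimeofday(&start_time, nullptr);
for (int i = 0; i < count; i++) {
    out[i] = sinf(arr[i]) * sinf(arr[i]);
}
gettimeofday(&stop_time, nullptr);
printf("C++ std Complete time: %f ms\n",
       (get_us(stop_time) - get_us(start_time)) / 1000);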
root@imx6dl-XXX:~/TensorFlowLite/100000# ./test converted_model.tflite 100000
model size is 100000
Read 100000 data from input file
Loading Model File ....
Model Loading Complete
Tensorflow inport modle time: 1.047000 ms
Interpreter Construct Complete
Tensorflow build interpreter time: 3.944000 ms
Tensorflow alloc tensor time: 4.248000 ms
Tensorflow set input time: 9.393000 ms
Tensorflow Complete time: 40.937000 ms
C++ std Complete time: 0.001000 ms
Next I'll take a look at the operator side and see whether I can add cos().
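My rough guess at how that would go, based on the custom-op hooks in contrib/lite (completely unverified; CosPrepare and CosEval are names I made up): implement a TfLiteRegistration whose Eval applies cosf() elementwise, and register it on the resolver before building the interpreter.

// Unverified sketch of a custom cos() kernel for contrib/lite.
// CosPrepare/CosEval are hypothetical names; needs <cmath>.
TfLiteStatus CosPrepare(TfLiteContext* context, TfLiteNode* node) {
    TfLiteTensor* input = &context->tensors[node->inputs->data[0]];
    TfLiteTensor* output = &context->tensors[node->outputs->data[0]];
    // Output shape mirrors the input shape.
    return context->ResizeTensor(context, output, TfLiteIntArrayCopy(input->dims));
}

TfLiteStatus CosEval(TfLiteContext* context, TfLiteNode* node) {
    TfLiteTensor* input = &context->tensors[node->inputs->data[0]];
    TfLiteTensor* output = &context->tensors[node->outputs->data[0]];
    int n = 1;
    for (int i = 0; i < input->dims->size; i++) n *= input->dims->data[i];
    for (int i = 0; i < n; i++) output->data.f[i] = cosf(input->data.f[i]);
    return kTfLiteOk;
}

// Hooked up before InterpreterBuilder runs:
//   static TfLiteRegistration reg = {nullptr, nullptr, CosPrepare, CosEval};
//   resolver.AddCustom("Cos", &reg);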