TensorFlow - (1)

1. Building a custom TensorFlow Lite framework for iOS

  • The TensorFlow Lite CocoaPods release only supports iOS 9 and later, so to support iOS 8 and above you need to compile the source manually and build a static library yourself;
  • TensorFlow Lite currently supports Swift and Objective-C; if those cover your needs, integrate directly via CocoaPods;
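For the CocoaPods route, a minimal Podfile might look like the following sketch. The pod names `TensorFlowLiteSwift` and `TensorFlowLiteObjC` are the official TensorFlow Lite pods; the target name `YourApp` is a placeholder:

```ruby
# Podfile sketch — 'YourApp' is a placeholder target name.
platform :ios, '9.0'   # the prebuilt pods require iOS 9+

target 'YourApp' do
  use_frameworks!
  # Pick one, depending on the language you integrate with:
  pod 'TensorFlowLiteSwift'   # Swift API
  # pod 'TensorFlowLiteObjC'  # Objective-C API
end
```

After editing the Podfile, run `pod install` and open the generated `.xcworkspace`.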

2. Using the C++ API on iOS

  • Before using the API, prepare a TensorFlow Lite model: once training is complete, convert the model to the .tflite format;
  • The class that loads the model:
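The conversion step is typically done in Python with the `tf.lite.TFLiteConverter` API. A minimal sketch, assuming TensorFlow 2.x and a SavedModel on disk (`saved_model_dir` is a placeholder path):

```python
# Sketch: convert a trained TensorFlow SavedModel to a .tflite flatbuffer.
# Assumes TensorFlow 2.x is installed; "saved_model_dir" is a placeholder.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
tflite_model = converter.convert()  # returns the serialized model as bytes

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```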

class FlatBufferModel {
 public:
  // Builds a model based on a file.
  // Caller retains ownership of `error_reporter` and must ensure its lifetime
  // is longer than the FlatBufferModel instance.
  // Returns a nullptr in case of failure.
  static std::unique_ptr<FlatBufferModel> BuildFromFile(
      const char* filename,
      ErrorReporter* error_reporter = DefaultErrorReporter());

  // Builds a model based on a pre-loaded flatbuffer.
  // Caller retains ownership of the buffer and should keep it alive until
  // the returned object is destroyed. Caller also retains ownership of
  // `error_reporter` and must ensure its lifetime is longer than the
  // FlatBufferModel instance.
  // Returns a nullptr in case of failure.
  // NOTE: this does NOT validate the buffer, so it should NOT be called on
  // invalid/untrusted input. Use VerifyAndBuildFromBuffer in that case.
  static std::unique_ptr<FlatBufferModel> BuildFromBuffer(
      const char* caller_owned_buffer, size_t buffer_size,
      ErrorReporter* error_reporter = DefaultErrorReporter());
};
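As the comments note, both builders return `nullptr` on failure, so the result should be checked before use. A minimal sketch, assuming the usual TensorFlow Lite include path (the `LoadModel` helper name is illustrative, not part of the API):

```cpp
#include <cstdio>
#include <memory>
#include "tensorflow/lite/model.h"  // assumed include path for FlatBufferModel

// Illustrative helper: load a .tflite model and report failure.
std::unique_ptr<tflite::FlatBufferModel> LoadModel(const char* path) {
  // BuildFromFile returns nullptr on failure (missing or corrupt file).
  auto model = tflite::FlatBufferModel::BuildFromFile(path);
  if (!model) {
    std::fprintf(stderr, "Failed to load model from %s\n", path);
  }
  return model;
}
```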
  • Steps for loading and running a model in C++:

1. Build a FlatBufferModel and initialize an Interpreter instance from it;
2. Optionally resize the input tensors;
3. Set the input tensor values;
4. Invoke inference;
5. Read the output tensor values.

Notes on using the Interpreter object:
Tensors are identified by integer indices; avoid comparisons using strings;
Do not access one Interpreter concurrently from multiple threads;
Input and output tensor memory must be allocated with AllocateTensors() after any resize.



std::unique_ptr<tflite::FlatBufferModel> model =
    tflite::FlatBufferModel::BuildFromFile(path_to_model);
 
tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::InterpreterBuilder(*model, resolver)(&interpreter);
 
// Resize input tensors, if desired.
interpreter->AllocateTensors();
 
float* input = interpreter->typed_input_tensor<float>(0);
// Fill `input`.
 
interpreter->Invoke();
 
float* output = interpreter->typed_output_tensor<float>(0);
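If the model's input shape needs to change (for example, a different batch size or image resolution), the resize must happen before `AllocateTensors()`. Continuing from the snippet above, a hedged sketch (the shape `{1, 224, 224, 3}` is illustrative, not from the source):

```cpp
// Sketch: resize the first input tensor before allocation.
// The shape {1, 224, 224, 3} is an illustrative example.
int input_index = interpreter->inputs()[0];
interpreter->ResizeInputTensor(input_index, {1, 224, 224, 3});

// Memory must be (re)allocated after any resize, as noted above.
interpreter->AllocateTensors();
```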
