Custom object detection tflite model for Android using TensorFlow

In this tutorial, we will learn how to build a custom object detection model in TensorFlow and then convert the model to tflite for Android. I will go through it step by step. Every step is important, so don't miss any.

The TensorFlow Object Detection API changes frequently and also has a lot of open issues. I have tried many combinations, and the one I am posting here works for me.

1. Installation and requirements

You can create a virtual environment, or simply run the commands without one. I recommend installing Anaconda for Python (≥3.7) (link).

I am using the Windows 10 operating system. After installing Python, there are a few libraries you have to install. First we are going to install TensorFlow. I know TensorFlow 2 is already out, but I faced some major problems using it for object detection, so install TensorFlow 1.x, preferably version 1.5.0. You can install it using pip:

pip install tensorflow==1.5.0
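
A quick sanity check that the install worked; this just imports TensorFlow and prints whatever version pip installed:

python -c "import tensorflow as tf; print(tf.__version__)"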

Other Dependencies

pip install cython

The Tensorflow Object Detection API uses Protobufs to configure model and training parameters. Before the framework can be used, the Protobuf libraries must be downloaded and compiled.

  • Go to the protoc releases page
  • Download a protoc release (e.g. protoc-3.12.3-win64.zip)
  • Extract the contents into a new folder in Program Files (C:/Program Files/protobuf3)
  • Add this folder to the system PATH environment variable
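
A quick way to confirm protoc is visible on the PATH (open a new command prompt first); it should print the version you downloaded:

protoc --version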

Now clone the object detection models repository from here. I have kept this folder on the D drive as D:/models. Open a command prompt in D:/models and execute the following commands.

D:\models> cd research
D:\models\research> protoc object_detection/protos/*.proto --python_out=.

Now you have to set a path variable through the command line.

set PYTHONPATH=D:\models\research;D:\models\research\slim

Note: Every time you open a new command prompt you have to type the above commands.
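
If you don't want to re-type it every session, you can persist the variable with Windows' setx (a convenience sketch; setx writes to your user environment, so open a fresh command prompt afterwards):

setx PYTHONPATH "D:\models\research;D:\models\research\slim"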

Now check that everything is working by running the test command below.

In the research folder, run:
python object_detection/builders/model_builder_test.py

If everything is fine you will see something like this

-----------------------------------------------------------
Ran 17 tests in 0.28s
Ok (skipped=1)

2. Training the Model

2.1 Collect data and label them

First, collect a set of images and store them in a folder named images, preferably inside the models folder.

Labelling images for object detection is a hectic task, but fortunately there are tools that can ease the process. Install the following tool:

pip install labelImg
labelImg D:/models/images

You can follow the link here on how to annotate images.

You will get an XML file for each image. Store the XML files in the same location as the images, here D:/models/images.
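
For reference, labelImg saves annotations in Pascal VOC XML; a (hypothetical) annotation looks roughly like this, with one <object> block per labelled box and the name matching a class in the label map defined below:

<annotation>
  <folder>images</folder>
  <filename>phone_01.jpg</filename>  <!-- hypothetical file name -->
  <size><width>640</width><height>480</height><depth>3</depth></size>
  <object>
    <name>mobile</name>  <!-- must match a class in label_map.pbtxt -->
    <bndbox>
      <xmin>120</xmin><ymin>80</ymin><xmax>310</xmax><ymax>400</ymax>
    </bndbox>
  </object>
</annotation>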

Now we will make a label map that holds all our classes. Make an annotations folder in the models directory, create a label_map.pbtxt file, and add the classes following this structure:

item {
  id: 1
  name: 'landline'
}
item {
  id: 2
  name: 'mobile'
}

Now we will convert the images and XML annotations to TensorFlow records. Make a tf_record folder in models; our TF records will go there.

To convert to TensorFlow records you have to run a script; download it from here and keep it in a new scripts folder inside models. Run the following command in the models/scripts folder.

python generate_tfrecord.py -x D:\models\images -l D:\models\annotations\label_map.pbtxt -o D:\models\tf_record\train.record

This will generate train.record in the tf_record folder
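
As an optional sanity check that records were actually written, you can count the entries in the file with the TF 1.x record iterator:

python -c "import tensorflow as tf; print(sum(1 for _ in tf.python_io.tf_record_iterator(r'D:\models\tf_record\train.record')))"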

The folder structure looks like this.

models
  -research
  -annotations
    -label_map.pbtxt
  -images
    -jpg and xmls
  -scripts
    -generate_tfrecord.py
  -tf_record
    -train.record
  ...

2.2 Pre-trained model and transfer learning

We will use a pre-trained model and further train it on our own dataset. For this tutorial, we are using the ssd_mobilenet_v2_quantized model (note: only SSD models can be used with Android). Download the pre-trained model from here. Create a folder named checkpoints.

Extract the pre-trained model and save the extracted files (model.ckpt.meta, model.ckpt.index, model.ckpt.data-00000-of-00001) in models/checkpoints

Now we need the config file for this model. This can be found here

models/research/object_detection/samples/configs

Search for ssd_mobilenet_v2_quantized_300x300_coco.config

Copy this config and paste it into your models folder. Now we need to edit it according to our needs.

  • Set the number of classes in num_classes
  • Set fine_tune_checkpoint: "checkpoints/model.ckpt"
  • Set the train input reader as below. For this tutorial, I have skipped the evaluation part.

train_input_reader: {
  tf_record_input_reader {
    input_path: "tf_record/train.record"
  }
  label_map_path: "annotations/label_map.pbtxt"
}
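
For orientation, the relevant parts of the edited config end up looking roughly like this (num_classes is 2 for the landline/mobile classes above; surrounding fields are omitted and left at their defaults):

model {
  ssd {
    num_classes: 2
    ...
  }
}
train_config: {
  fine_tune_checkpoint: "checkpoints/model.ckpt"
  ...
}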

Now it is time to train on our dataset; just run the following command from the models folder. Before that, make a folder named new_checkpoints where the new checkpoints will be stored during training.

python D:\models\research\object_detection\legacy\train.py \
    --logtostderr \
    --train_dir=D:\models\new_checkpoints \
    --pipeline_config_path=D:\models\ssd_mobilenet_v2_quantized_300x300_coco.config

If everything works fine you will see the following in the cmd after 100 steps of training

INFO:tensorflow:global step 100 loss=2.331

The training can take a long time depending on the machine's processing power. Don't rush it; let the model train until the loss is less than 2. The model saves checkpoints every 100 steps.

You will get new checkpoints in the new_checkpoints folder
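
To watch the loss curve while training, you can point TensorBoard at the same directory (assuming TensorBoard came with your TensorFlow install):

tensorboard --logdir=D:\models\new_checkpoints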

3. Model export and Tflite conversion

We will now convert the checkpoints into a .pb file. Make a new folder named final_models to store these models.

Run the following command

python D:\models\research\object_detection\export_tflite_ssd_graph.py \      
--pipeline_config_path D:\models\ssd_mobilenet_v2_quantized_300x300_coco.config \
--trained_checkpoint_prefix D:\models\new_checkpoints\model.ckpt- \
--output_directory D:\models\final_models --add_postprocessing_op=true

After executing this command you will get two files in the final_models folder: tflite_graph.pb and tflite_graph.pbtxt.

Now we will convert this model to tflite with the following command.

tflite_convert \
    --output_file=D:/models/final_models/yourtflite.tflite \
    --graph_def_file=D:/models/final_models/tflite_graph.pb \
    --input_shapes=1,300,300,3 \
    --input_arrays=normalized_input_image_tensor \
    --output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3 \
    --inference_type=QUANTIZED_UINT8 \
    --mean_values=128 \
    --std_dev_values=128 \
    --change_concat_input_ranges=false \
    --allow_custom_ops

That's it; you will now get your yourtflite.tflite file in the final_models folder. Now you can deploy it to your Android device.
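
Before wiring it into an app, it can help to load the .tflite file in Python and confirm the tensor shapes. A minimal sketch, assuming your TF 1.x build exposes tf.lite.Interpreter (older builds used tf.contrib.lite):

import tensorflow as tf

# Load the converted model and inspect its inputs/outputs.
interpreter = tf.lite.Interpreter(model_path="D:/models/final_models/yourtflite.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details())   # expect one uint8 input of shape [1, 300, 300, 3]
print(interpreter.get_output_details())  # detection boxes, classes, scores, and count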

I will post the Android part of this soon. Stay tuned.

Thanks!!

Translated from: https://medium.com/analytics-vidhya/custom-object-detection-tflite-model-for-android-using-tensorflow-3d76e31afbc4
