TensorFlow Lite Notes (2)

TensorFlow Lite Model Maker exports models together with metadata, which provides a standard for model descriptions. The default model is EfficientNet-Lite0. The library currently supports several pre-trained models for image classification, including the EfficientNet-Lite* family, MobileNetV2, and ResNet50.
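As a quick reference, here is a minimal end-to-end sketch of the Model Maker image-classification flow; the flower_photos/ path is a placeholder for any folder whose subdirectories are the class names:

```python
from tflite_model_maker import image_classifier
from tflite_model_maker.image_classifier import DataLoader

# Load images from a folder whose subdirectories are the class labels.
data = DataLoader.from_folder('flower_photos/')
train_data, validation_data = data.split(0.9)

# Trains a classifier head on the default backbone, EfficientNet-Lite0.
model = image_classifier.create(train_data, validation_data=validation_data)

# Exports model.tflite with metadata (the default export behavior).
model.export(export_dir='.')
```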
Exported model sizes (default export vs. float16 post-training quantization):

| Model | model.tflite | model_fp16.tflite |
|---|---|---|
| EfficientNet-Lite0 | 4.0 MB | 6.8 MB |
| MobileNetV2 | 2.8 MB | 4.6 MB |
| InceptionV3 | 22.4 MB | 43.8 MB |

Note that the default model.tflite is smaller than the float16 variant: Model Maker already applies a post-training quantization technique in its default export, so the float16 file stores 16-bit weights where the default stores 8-bit ones.

Post-training quantization is a conversion technique that can reduce model size and inference latency, while also improving CPU and hardware accelerator inference speed, with little degradation in model accuracy. It is therefore widely used to optimize models.
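Besides the float16 preset used later in these notes, QuantizationConfig offers presets for the other common post-training schemes. A sketch, assuming train_data is the DataLoader used for training:

```python
from tflite_model_maker.config import QuantizationConfig

# Dynamic-range quantization: int8 weights, no calibration data needed.
dynamic_config = QuantizationConfig.for_dynamic()
model.export(export_dir='.', tflite_filename='model_dynamic.tflite',
             quantization_config=dynamic_config)

# Full-integer quantization calibrates activation ranges with
# representative data; here the training set is simply reused.
int8_config = QuantizationConfig.for_int8(representative_data=train_data)
model.export(export_dir='.', tflite_filename='model_int8.tflite',
             quantization_config=int8_config)
```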

Summary of image classification with TensorFlow Lite Model Maker:
Supported export formats

The allowed export formats can be one or a list of the following:

  • ExportFormat.TFLITE
  • ExportFormat.LABEL
  • ExportFormat.SAVED_MODEL
By default, it exports just the TensorFlow Lite model with metadata. You can also selectively export different files. For instance, to export only the label file:

```python
from tflite_model_maker import ExportFormat

model.export(export_dir='.', export_format=ExportFormat.LABEL)
```
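Since export_format also accepts a list, several artifacts can be written in one call:

```python
# Export the SavedModel and the label file together.
model.export(export_dir='.',
             export_format=[ExportFormat.SAVED_MODEL, ExportFormat.LABEL])
```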

Customize post-training quantization of the TensorFlow Lite model

```python
from tflite_model_maker.config import QuantizationConfig

# Export a float16-quantized variant alongside the default model.
config = QuantizationConfig.for_float16()
model.export(export_dir='.', tflite_filename='model_fp16.tflite',
             quantization_config=config)
```
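To confirm the quantized file hasn't lost much accuracy, Model Maker's evaluate_tflite helper can score the exported model directly (test_data here is an assumed held-out DataLoader):

```python
# Evaluate the exported TFLite file on held-out data.
result = model.evaluate_tflite('model_fp16.tflite', test_data)
print(result)
```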

Change to a model supported by this library

```python
from tflite_model_maker import image_classifier, model_spec

# Retrain with MobileNetV2 instead of the default EfficientNet-Lite0.
model = image_classifier.create(train_data,
                                model_spec=model_spec.get('mobilenet_v2'),
                                validation_data=validation_data)
```

Change to a model from TensorFlow Hub

```python
# Wrap a TensorFlow Hub feature-vector model in a ModelSpec.
inception_v3_spec = image_classifier.ModelSpec(
    uri='https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1')
inception_v3_spec.input_image_shape = [299, 299]
```
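Passing this spec to create() then retrains on top of the Hub model, exactly as with the built-in specs:

```python
# Retrain using the Hub-backed InceptionV3 feature extractor.
model = image_classifier.create(train_data, model_spec=inception_v3_spec,
                                validation_data=validation_data)
```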

Change to your own custom model
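Model Maker doesn't prescribe a single way to do this, but a model saved in TF Hub's SavedModel format can be wired in through the same ModelSpec mechanism. A rough, untested sketch; the toy backbone and the my_feature_vector path are made up for illustration:

```python
import tensorflow as tf
from tflite_model_maker import image_classifier

# Build and save a toy feature-vector model in SavedModel format.
# (In practice this would be your own pre-trained backbone.)
feature_extractor = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.GlobalAveragePooling2D(),
])
feature_extractor.save('my_feature_vector')

# Point ModelSpec's uri at the local SavedModel directory.
custom_spec = image_classifier.ModelSpec(uri='my_feature_vector')
custom_spec.input_image_shape = [224, 224]
model = image_classifier.create(train_data, model_spec=custom_spec,
                                validation_data=validation_data)
```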
