HyperLPR Chinese License Plate Recognition

Original source: https://blog.xugaoxiang.com/ai/hyperlpr.html

Hardware and Software Environment

  • Intel® Xeon® CPU E5-1607 v4 @ 3.10GHz
  • GTX 1070 Ti 8G
  • ubuntu 18.04 64bit
  • anaconda with python 3.6
  • tensorflow-gpu
  • keras
  • opencv 3.4.3

Introduction to HyperLPR

HyperLPR is a high-performance, deep-learning-based open-source Chinese license plate recognition project, hosted at https://github.com/zeusees/HyperLPR. It is written in Python and supports all the major platforms, including Linux, Android, iOS, and Windows. Its recognition rate is quite good, and the plate types currently supported include:

  • single-row blue plates
  • single-row yellow plates
  • new-energy vehicle plates
  • white police plates
  • embassy / Hong Kong-Macau plates
  • driving-school (coach) plates

HyperLPR Detection Pipeline

  • Use OpenCV's Haar cascade to detect the rough location of the plate (see the sketch after this list)
  • Extend the detected rectangular region
  • Fit the upper and lower plate boundaries using MSER-like multi-level binarization and RANSAC
  • Regress the left and right plate boundaries with a CNN
  • Deskew the plate using a texture-field-based algorithm
  • Segment the characters with a CNN sliding window
  • Recognize the characters with a CNN
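
As a rough illustration of the first step, the snippet below runs an OpenCV Haar cascade over a grayscale image to find candidate plate regions. This is only a minimal sketch: the cascade file path and the detection parameters are illustrative assumptions, not the exact values HyperLPR uses internally.

import cv2

# Hypothetical path to a trained plate cascade (HyperLPR ships its own model files)
watch_cascade = cv2.CascadeClassifier('model/cascade.xml')

img = cv2.imread('demo_images/demo1.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Plates are wide and short, hence the small minSize height; parameters are only examples
plates = watch_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3, minSize=(36, 9))

for (x, y, w, h) in plates:
    # Draw the rough plate location; the later pipeline stages refine these boundaries
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)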

Installing HyperLPR

git clone https://github.com/zeusees/HyperLPR.git
cd HyperLPR

The project supports both Python 2 and Python 3, kept in separate directories: hyperlpr and hyperlpr_py3. My environment is Python 3 under Anaconda, so simply copying the hyperlpr_py3 folder into ~/anaconda3/lib/python3.6/site-packages/ is enough.
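
For example (assuming the default Anaconda installation path mentioned above):

cp -r hyperlpr_py3 ~/anaconda3/lib/python3.6/site-packages/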

Test Results

Images
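
# test_image.py, run with: python test_image.py --image demo_images/demo1.png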
from hyperlpr_py3 import pipline as pp
import cv2
import click

@click.command()
@click.option('--image', help = 'input image')
def main(image):
    img = cv2.imread(image)
    img,res = pp.SimpleRecognizePlateByE2E(img)
    print(res)

if __name__ == '__main__':
    main()

Testing with the demo images bundled with the project; due to the shooting angle, some of the recognition results are wrong.

longjing@FR:~/Work/gogs/LPR$ python test_image.py --image demo_images/demo1.png 
Using TensorFlow backend.
2018-12-18 15:28:27.628782: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-12-18 15:28:27.765931: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: 
name: GeForce GTX 1070 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.683
pciBusID: 0000:03:00.0
totalMemory: 7.93GiB freeMemory: 7.15GiB
2018-12-18 15:28:27.765967: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2018-12-18 15:28:28.030061: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-12-18 15:28:28.030097: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 
2018-12-18 15:28:28.030105: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N 
2018-12-18 15:28:28.030306: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6899 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070 Ti, pci bus id: 0000:03:00.0, compute capability: 6.1)
res 闽R6G81
川01C 0.6178697198629379
res 1035
吉晋K03 0.5994847059249878
res K032301
K030X 0.824301564693451
res JK0330
贵晋JK0330 0.9602108970284462
res 闽CR8W
CRM1 0.6328456625342369
res 1NX888
云A赣X881 0.5053929431097848
res 桂ANX889
桂ANX889 0.984427673476083
res 贵JD1687
贵JD1687 0.9756925020899091
res 贵JC3732
贵JC3732 0.8844872457640511
res 1T687
L87 0.6002845267454783
[[[], '川01C', 0.6178697198629379], [[], '吉晋K03', 0.5994847059249878], [[], 'K030X', 0.824301564693451], [[], '贵晋JK0330', 0.9602108970284462], [[], 'CRM1', 0.6328456625342369], [[], '云A赣X881', 0.5053929431097848], [[], '桂ANX889', 0.984427673476083], [[], '贵JD1687', 0.9756925020899091], [[], '贵JC3732', 0.8844872457640511], [[], 'L87', 0.6002845267454783]

Video Files

This is essentially the same as the image case; we still rely on OpenCV's API.

# -*- coding: utf-8 -*-
# @time    : 18-12-18 3:05 PM
# @author  : xugaoxiang
# @email   : [email protected]
# @website : https://xugaoxiang.com
# @file    : test_video_file.py
# @software: PyCharm
# python test_video_file.py --video test.mp4

import time
import cv2
from hyperlpr_py3 import pipline as pp
import click


@click.command()
@click.option('--video', help = 'input video file')
def main(video):
    print("[INFO] starting video stream...")
    # vs = VideoStream(src=0).start()
    stream = cv2.VideoCapture(video)
    time.sleep(2.0)

    while True:
        # grab the frame from the threaded video stream
        grabbed, frame = stream.read()
        if not grabbed:
            print('No data, break.')
            break

        _, res = pp.SimpleRecognizePlate(frame)

        # convert the input frame from BGR to RGB then resize it to have
        # a width of 750px (to speedup processing)
        # rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        # rgb = imutils.resize(frame, width = 750)
        # r = frame.shape[1] / float(rgb.shape[1])

        cv2.putText(frame, str(res), (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 255, 255), 2)

        cv2.imshow("Frame", frame)
        key = cv2.waitKey(1) & 0xFF

        # if the `q` key was pressed, break from the loop
        if key == ord("q"):
            break

    # do a bit of cleanup
    cv2.destroyAllWindows()
    stream.release()


if __name__ == '__main__':
    main()

The program's output is shown below. The recognition rate is decent, but since detection and recognition are themselves time-consuming, the displayed video looks like slow-motion playback; the application layer needs to drop frames according to its own needs (see the sketch after the log below).

longjing@FR:~/Work/gogs/LPR$ python test_video_file.py --video ~/backup/lpr/2s_lpr.mp4 
Using TensorFlow backend.
2018-12-18 16:00:17.067081: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-12-18 16:00:17.203116: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: 
name: GeForce GTX 1070 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.683
pciBusID: 0000:03:00.0
totalMemory: 7.93GiB freeMemory: 7.12GiB
2018-12-18 16:00:17.203152: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2018-12-18 16:00:17.471942: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-12-18 16:00:17.471983: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 
2018-12-18 16:00:17.471991: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N 
2018-12-18 16:00:17.472190: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6875 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070 Ti, pci bus id: 0000:03:00.0, compute capability: 6.1)
[INFO] starting video stream...
车牌: 沪B28600 相似度: 0.9864972574370248
车牌: 沪B28600 相似度: 0.9910101975713458
车牌: 沪B28600 相似度: 0.9908801657812936
车牌: 沪B28600 相似度: 0.9916305116244725
...
...
...
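
As noted above, a simple way to keep the display close to real time is to run recognition only on every N-th frame and reuse the last result in between. Below is a minimal sketch of the read loop, assuming the same stream and pp objects as in test_video_file.py; the SKIP value is an arbitrary example to tune for your hardware.

SKIP = 5          # process one out of every 5 frames
frame_id = 0
res = ''

while True:
    grabbed, frame = stream.read()
    if not grabbed:
        break

    frame_id += 1
    if frame_id % SKIP == 0:
        # only this fraction of frames goes through the expensive pipeline
        _, res = pp.SimpleRecognizePlate(frame)

    # every frame is still displayed, annotated with the latest result
    cv2.putText(frame, str(res), (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 255, 255), 2)
    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break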

IP Camera (IPC)

This is much the same as the video file case, so here is the code directly.

# -*- coding: utf-8 -*-
# @time    : 18-12-18 3:05 PM
# @author  : xugaoxiang
# @email   : [email protected]
# @website : https://xugaoxiang.com
# @file    : test_ipc.py
# @software: PyCharm
# python test_ipc.py --video rtsp://admin:[email protected]:554/ISAPI/streaming/channels/101

from imutils.video import VideoStream
import time
import cv2
from hyperlpr_py3 import pipline as pp
import click

@click.command()
@click.option('--video', help = 'input video, ipcamera url or usb camera index.')
def main(video):
   print("[INFO] starting video stream...")
   # vs = VideoStream(src=0).start()
   vs = VideoStream(video).start()
   time.sleep(2.0)

   while True :
       # grab the frame from the threaded video stream
       frame = vs.read()

       img, res = pp.SimpleRecognizePlate(frame)
       print(res)

       # convert the input frame from BGR to RGB then resize it to have
       # a width of 750px (to speedup processing)
       # rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
       # rgb = imutils.resize(frame, width = 750)
       # r = frame.shape[1] / float(rgb.shape[1])

       cv2.putText(frame, str(res), (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 255, 255), 2)

       cv2.imshow("Frame", frame)
       key = cv2.waitKey(1) & 0xFF

       # if the `q` key was pressed, break from the loop
       if key == ord("q") :
           break

   # do a bit of cleanup
   cv2.destroyAllWindows()
   vs.stop()

if __name__ == '__main__':
   main()
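
One caveat: click passes --video through as a string, so to use a bare USB camera index such as 0 (rather than an RTSP URL), convert it before handing it to VideoStream. A minimal sketch:

src = int(video) if video.isdigit() else video
vs = VideoStream(src).start()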

GUI Detection Tool

A helpful developer has built a nice Qt GUI shell on top of the original framework, which is convenient for those who are not comfortable with the command line. Very nice.

Q & A

Q1
module 'tensorflow.python.training.checkpointable' has no attribute 'CheckpointableBase'

or

longjing@FR:~/Work/gogs/LPR$ python test_video_file.py 
Using TensorFlow backend.
Traceback (most recent call last):
  File "test_video_file.py", line 12, in <module>
    from hyperlpr_py3 import pipline as pp
  File "/home/longjing/Work/gogs/LPR/hyperlpr_py3/pipline.py", line 5, in <module>
    from . import segmentation
  File "/home/longjing/Work/gogs/LPR/hyperlpr_py3/segmentation.py", line 16, in <module>
    from keras.models import Sequential
  File "/home/longjing/anaconda3/lib/python3.6/site-packages/keras/__init__.py", line 3, in <module>
    from . import utils
  File "/home/longjing/anaconda3/lib/python3.6/site-packages/keras/utils/__init__.py", line 6, in <module>
    from . import conv_utils
  File "/home/longjing/anaconda3/lib/python3.6/site-packages/keras/utils/conv_utils.py", line 9, in <module>
    from .. import backend as K
  File "/home/longjing/anaconda3/lib/python3.6/site-packages/keras/backend/__init__.py", line 89, in <module>
    from .tensorflow_backend import *
  File "/home/longjing/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 6, in <module>
    from tensorflow.python.framework import ops as tf_ops
ModuleNotFoundError: No module named 'tensorflow.python.framework'
A1

Reinstall tensorflow-gpu.

If you do not have a GPU, install the CPU version instead.

pip install --upgrade tensorflow-gpu
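
For a CPU-only machine, the corresponding package is plain tensorflow:

pip install --upgrade tensorflow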

Q2

tensorflow-gpu is installed, but startup still reports a missing module:

longjing@FR:~/Work/github/HyperLPR$ python demo_py3.py 
Using TensorFlow backend.
Traceback (most recent call last):
  File "demo_py3.py", line 41, in <module>
    import HyperLPRLite as pr
  File "/home/longjing/Work/github/HyperLPR/HyperLPRLite.py", line 4, in <module>
    from keras import backend as K
  File "/home/longjing/anaconda3/lib/python3.6/site-packages/keras/__init__.py", line 3, in <module>
    from . import utils
  File "/home/longjing/anaconda3/lib/python3.6/site-packages/keras/utils/__init__.py", line 6, in <module>
    from . import conv_utils
  File "/home/longjing/anaconda3/lib/python3.6/site-packages/keras/utils/conv_utils.py", line 9, in <module>
    from .. import backend as K
  File "/home/longjing/anaconda3/lib/python3.6/site-packages/keras/backend/__init__.py", line 89, in <module>
    from .tensorflow_backend import *
  File "/home/longjing/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 5, in <module>
    import tensorflow as tf
  File "/home/longjing/anaconda3/lib/python3.6/site-packages/tensorflow/__init__.py", line 24, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "/home/longjing/anaconda3/lib/python3.6/site-packages/tensorflow/python/__init__.py", line 88, in <module>
    from tensorflow.python import keras
  File "/home/longjing/anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/__init__.py", line 24, in <module>
    from tensorflow.python.keras import activations
  File "/home/longjing/anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/activations/__init__.py", line 22, in <module>
    from tensorflow.python.keras._impl.keras.activations import elu
  File "/home/longjing/anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/_impl/keras/__init__.py", line 21, in <module>
    from tensorflow.python.keras._impl.keras import activations
  File "/home/longjing/anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/_impl/keras/activations.py", line 23, in <module>
    from tensorflow.python.keras._impl.keras import backend as K
  File "/home/longjing/anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/_impl/keras/backend.py", line 37, in <module>
    from tensorflow.python.layers import base as tf_base_layers
  File "/home/longjing/anaconda3/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 25, in <module>
    from tensorflow.python.keras.engine import base_layer
  File "/home/longjing/anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/engine/__init__.py", line 23, in <module>
    from tensorflow.python.keras.engine.base_layer import InputSpec
  File "/home/longjing/anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 35, in <module>
    from tensorflow.python.keras import backend
  File "/home/longjing/anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/backend/__init__.py", line 22, in <module>
    from tensorflow.python.keras._impl.keras.backend import abs
ImportError: cannot import name 'abs'
A2

Uninstall tensorflow-gpu and protobuf, then reinstall tensorflow-gpu; during the tensorflow-gpu installation pip will automatically install protobuf for you.

pip uninstall tensorflow-gpu
pip uninstall protobuf
pip install --upgrade tensorflow-gpu

References

  • https://github.com/zeusees/HyperLPR
  • https://www.tensorflow.org/install/pip
  • https://github.com/tensorflow/tensorflow/issues/20778
  • https://pypi.org/project/protobuf/
