OpenVINO Practice on the Raspberry Pi 3B+, Part 3

OpenVINO Model Optimization

Setting Up the OpenVINO Development Environment

Reference 1: https://software.intel.com/en-us/openvino-toolkit/choose-download/free-download-linux
Reference 2: https://software.intel.com/en-us/articles/OpenVINO-ModelOptimizer
This part was done inside VirtualBox + Ubuntu (the Model Optimizer runs on the host, not on the Pi).

Model Source

https://github.com/movidius/ncappzoo.git
The models in this repo were originally prepared for testing the NCSDK. The NCS2 does not support the NCSDK, but we can still borrow the models: after conversion, OpenVINO can use them too.

Test 1: GenderNet

Edit deploy.prototxt so the network takes a single image (batch size 1):
input_dim: 1

Then rename the prototxt to match the .caffemodel, so the Model Optimizer pairs the two files automatically, and run the conversion:

mv deploy.prototxt gender_net.prototxt
python3 model_optimizer/mo.py --input_model models/GenderNet/gender_net.caffemodel --output_dir .

This generates three files, but we only need the .xml (network topology) and .bin (weights).
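As a quick sanity check on the conversion, the generated .xml can be inspected with a few lines of Python, since the IR topology is plain XML. The snippet below is a minimal sketch; the inline XML is a fabricated stand-in for a real gender_net.xml, which has the same `<net>`/`<layers>`/`<layer>` structure but many more layers.

```python
import xml.etree.ElementTree as ET

# Fabricated miniature IR file, standing in for a real Model Optimizer output.
ir_xml = """
<net name="gender_net" version="10">
  <layers>
    <layer id="0" name="data" type="Parameter"/>
    <layer id="1" name="conv1" type="Convolution"/>
    <layer id="2" name="prob" type="SoftMax"/>
  </layers>
</net>
"""

# List every layer's name and type to confirm the topology looks sane.
root = ET.fromstring(ir_xml)
layers = [(l.get("name"), l.get("type")) for l in root.iter("layer")]
print(layers)
```

For a real IR file, replace `ET.fromstring(ir_xml)` with `ET.parse("gender_net.xml").getroot()`.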

import cv2
import imutils
import numpy as np
from imutils.video.pivideostream import PiVideoStream
from imutils.video import FPS
import time

# Paths to the normalization statistics and the IR files produced by the
# Model Optimizer.
stat = r'../gender/stat.txt'
xml = r'../gender/gender_net.xml'
bin_file = r'../gender/gender_net.bin'

# stat.txt: first line holds the per-channel means, second line the scales.
with open(stat, 'r') as f:
	lines = f.readlines()
	mean = [float(x) for x in lines[0].strip().split(" ")]
	std = [float(x) for x in lines[1].strip().split(" ")]
print(mean)
print(std)

# Load the IR pair and run inference on the Myriad (NCS2) device.
net = cv2.dnn.readNet(xml, bin_file)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)
winName = 'gender test'
cv2.namedWindow(winName, cv2.WINDOW_AUTOSIZE)
print("[INFO] sampling THREADED frames from `picamera` module...")
vs = PiVideoStream().start()
time.sleep(2.0)
fps = FPS().start()
# loop over some frames...this time using the threaded stream
while True:
	# grab the frame from the threaded video stream
	frame = vs.read()
	# Normalize in float32 so the mean subtraction cannot wrap around in
	# uint8, then clip back to 8-bit because the blob below uses CV_8U.
	img = frame.astype(np.float32)
	for c in range(3):
		img[:, :, c] = (img[:, :, c] - mean[c]) * std[c]
	img = np.clip(img, 0, 255).astype(np.uint8)
	image = cv2.UMat(imutils.resize(img, width=227))
	blob = cv2.dnn.blobFromImage(image, size=(227, 227), ddepth=cv2.CV_8U)
	net.setInput(blob)
	out = net.forward()
	# Output index 0 is the male score, index 1 the female score.
	if out[0, 0] > out[0, 1]:
		cv2.putText(frame, "MAN detected", (10, 10),
			cv2.FONT_HERSHEY_COMPLEX, 0.5, (0, 0, 255), 1)
	else:
		cv2.putText(frame, "Female detected", (10, 10),
			cv2.FONT_HERSHEY_COMPLEX, 0.5, (0, 0, 255), 1)
	cv2.imshow(winName, frame)

	key = cv2.waitKey(1) & 0xFF
	# if the `q` key was pressed, break from the loop
	if key == ord("q"):
		break

	# update the FPS counter
	fps.update()

# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
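For reference, the `blobFromImage` call above essentially resizes the frame and reorders it from HWC (height, width, channel) to NCHW, which is the layout the network expects. A NumPy-only sketch of that reordering (omitting the resize step, which needs OpenCV):

```python
import numpy as np

def to_blob(img):
    """Reorder an HxWxC image into a 1xCxHxW blob, the NCHW layout
    produced by cv2.dnn.blobFromImage (resize/scale steps omitted)."""
    return np.transpose(img, (2, 0, 1))[np.newaxis, ...]

img = np.zeros((227, 227, 3), dtype=np.uint8)
blob = to_blob(img)
print(blob.shape)  # (1, 3, 227, 227)
```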

  • The results here may not match what was obtained on the training host. Possible reasons:
    ** Precision lost during model conversion?
    ** The pipeline is not run correctly, e.g. image preprocessing. (Ours matches the demo's.)
    ** The training set may not have covered East Asian faces, leaving a data gap. This is the most likely cause.
  • Possible improvements:
    ** fine-tuning
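On the preprocessing point above, one pitfall worth ruling out explicitly: subtracting the channel mean from a uint8 frame in place does not go negative, it silently wraps modulo 256 and corrupts the input. A minimal demonstration of the difference between uint8 and float32 arithmetic:

```python
import numpy as np

pixel = np.array([10], dtype=np.uint8)

# uint8 arithmetic wraps around: 10 - 100 becomes 166, not -90.
wrapped = pixel - 100
# Converting to float32 first gives the mathematically correct value.
correct = pixel.astype(np.float32) - 100

print(wrapped)   # [166]
print(correct)   # [-90.]
```

This is why the script above converts the frame to float32 before normalizing.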

Test 2: AgeNet

Same procedure as above; the results are similar and very inaccurate.
Any ideas?
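Unlike GenderNet's two-way score, AgeNet classifies into age buckets, so decoding is an argmax over the output vector. A hedged sketch: `AGE_BUCKETS` below is the eight-group Adience label list commonly used with this model, not something read from the converted IR itself, and `fake_out` is a fabricated output vector for illustration.

```python
import numpy as np

# The eight Adience age groups commonly paired with AgeNet; verify the
# order against the model's own label file before trusting it.
AGE_BUCKETS = ["0-2", "4-6", "8-12", "15-20",
               "25-32", "38-43", "48-53", "60-100"]

def decode_age(out):
    """Map a (1, 8) network output to its most likely age bucket."""
    idx = int(np.argmax(out[0]))
    return AGE_BUCKETS[idx], float(out[0][idx])

# Fabricated network output, standing in for net.forward().
fake_out = np.array([[0.01, 0.02, 0.05, 0.1, 0.6, 0.12, 0.06, 0.04]])
print(decode_age(fake_out))  # ('25-32', 0.6)
```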
