YOLOv3 is first used to detect people, and pose estimation is then run on each detected person to obtain the human skeleton keypoints. Preliminary experiments show that robustness is limited and the results are easily affected by occlusion, but the recognition accuracy is quite good. The pipeline runs in three steps (sketched below):
1. Object detection
2. Determine the body's center (root) point
3. Recognize the human skeleton keypoints
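A minimal sketch of how the three steps chain together for one frame. The helper names detect_persons, estimate_root and estimate_pose are hypothetical stand-ins for the YOLOv3, ROOTNET and POSENET stages, not the repository's actual API (predict.py wires these steps internally):

def run_pipeline(frame, detect_persons, estimate_root, estimate_pose):
    # Step 1: person bounding boxes from the detector (YOLOv3)
    # Step 2: per-person root point / depth (ROOTNET)
    # Step 3: per-person skeleton keypoints (POSENET)
    results = []
    for bbox in detect_persons(frame):
        root = estimate_root(frame, bbox)
        keypoints = estimate_pose(frame, bbox, root)
        results.append((bbox, root, keypoints))
    return results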
Code link:
https://github.com/Algabeno/Human-pose-estimation-on-Yolov3
Original author's code:
https://github.com/mks0601/3DMPPE_POSENET_RELEASE
https://github.com/mks0601/3DMPPE_ROOTNET_RELEASE
torch == 1.2.0
pip install -r requirement.txt
run predict.py
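A quick sanity check that the installed PyTorch matches the version listed above (torch 1.2.0); newer versions may still work, but that is not guaranteed here:

import torch

# Print the installed torch version and whether a CUDA device is visible
# before running predict.py.
print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())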
Model weights:
Link: https://pan.baidu.com/s/1O0pt0fZbjuGKRqXXoDZ2dg
Extraction code: eo7b
Put person.pth in ~/model_data
Put snapshot_18.pth in ~/ROOTNET/demo
Put snapshot_24.pth in ~/POSENET/demo
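A small sketch to verify the three checkpoints are in the expected locations before running; ROOT is assumed to be the repository root (the ~ above), so adjust it to wherever your clone lives:

import os

# Expected checkpoint locations relative to the repository root.
ROOT = "."
checkpoints = [
    os.path.join(ROOT, "model_data", "person.pth"),
    os.path.join(ROOT, "ROOTNET", "demo", "snapshot_18.pth"),
    os.path.join(ROOT, "POSENET", "demo", "snapshot_24.pth"),
]
for path in checkpoints:
    print("OK" if os.path.isfile(path) else "MISSING", path)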
Edit ~/ROOTNET/demo/demo.py:
# Change these to your own file storage path
sys.path.insert(0, osp.join(r'Your file storage path', 'main'))
sys.path.insert(0, osp.join(r'Your file storage path', 'data'))
sys.path.insert(0, osp.join(r'Your file storage path', 'common'))
sys.path.insert(0, osp.join(r'Your file storage path', 'nets'))
sys.path.insert(0, osp.join(r'Your file storage path', 'utils1'))
Edit ~/POSENET/demo/demo1.py:
# Change these to your own file storage path
sys.path.insert(0, osp.join(r'Your file storage path', 'main1'))
sys.path.insert(0, osp.join(r'Your file storage path', 'data1'))
sys.path.insert(0, osp.join(r'Your file storage path', 'common'))
sys.path.insert(0, osp.join(r'Your file storage path', 'utils'))
In predict.py, make the following changes.
Point the video capture at your own input file:
cam = cv2.VideoCapture('path/video.mp4')  # replace with the path to your input video
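The output writer further below needs fps and sz; a plausible way to derive them from the opened capture (how predict.py actually obtains them is not shown in this excerpt):

# Frame rate and frame size of the source video; these are the values
# that fps and sz refer to when the output writer is opened below.
fps = cam.get(cv2.CAP_PROP_FPS)
sz = (int(cam.get(cv2.CAP_PROP_FRAME_WIDTH)),
      int(cam.get(cv2.CAP_PROP_FRAME_HEIGHT)))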
Choose whether the skeleton is drawn on the original frame or on a white-background image:
for n in range(person_num):
    vis_kps = np.zeros((3, joint_num))
    vis_kps[0, :] = output_pose_2d_list[n][:, 0]
    vis_kps[1, :] = output_pose_2d_list[n][:, 1]
    vis_kps[2, :] = 1
    vis_img = vis_keypoints(vis_img, vis_kps, skeleton)  # draw on the original frame
    # vis_img = vis_keypoints(white, vis_kps, skeleton)  # draw on a white-background image instead
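If you use the white-background option, white must have the same shape as the frame; a minimal sketch (how predict.py actually builds this canvas is an assumption), where vis_img is the current frame from the loop above:

import numpy as np

# An all-white canvas with the same shape and dtype as the current frame,
# so only the drawn skeleton is visible in the output.
white = np.full_like(vis_img, 255)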
Choose where the output video is saved:
vout_1.open('./output.mp4', fourcc, fps, sz, True)
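The open(...) call above assumes vout_1 and fourcc already exist elsewhere in predict.py; a plausible way to create them (the 'mp4v' codec is an assumption, choose one that works with your OpenCV build):

# Codec and writer object used by the open(...) call above; fps and sz
# should match the source video so the output plays back correctly.
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
vout_1 = cv2.VideoWriter()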