OpenPose Human Pose Estimation Course Project

Background: when I first got this topic I was completely lost and had no idea what to do. Although I did not need to train the model myself (model source), I had no idea how to use it, and the documentation was long and took a while to get through. Even after running the demo I still did not know what it was for, or rather, what I could actually build with it, so I kept searching online.

1. Downloading the Model

The downloaded package contains a demo executable in the bin folder; running it from the command line is the OpenPose example. (This is the model folder I downloaded.)
In addition, the models folder has four subfolders: face prediction, hand prediction, pose prediction, and one whose purpose I am not yet sure of. Each contains the corresponding model and configuration files. Pose prediction ships with three models; if your machine is not very powerful, I recommend the COCO model, as the BODY_25 model runs noticeably slower. I have not tried MPI.
[Figure 1]

2. Trying the Demo

See the help document openpose/doc/demo_overview.md
[Figure 2]
Open a command prompt in the openpose directory and run the demo following the usage in the document.
[Figure 3]
Note: if a black console window pops up and then immediately disappears, and OpenPose reports `error == cudaSuccess (2 vs. 0) out of memory`, your machine does not have enough GPU memory. There are two workarounds:

  1. Switch to the COCO model:

 bin\OpenPoseDemo.exe --video examples\media\video.avi --model_pose COCO

  2. Lower the network resolution (recommended):

 bin\OpenPoseDemo.exe --video examples\media\video.avi --net_resolution 320x176

(The resolution must be a multiple of 16; I ended up using 160x80, which runs very smoothly.)

Reference: Solved: OpenPose error: error == cudaSuccess (2 vs. 0) out of memory
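Since --net_resolution only accepts multiples of 16, a tiny helper (my own sketch, not part of OpenPose) can snap an arbitrary target size to a valid value before building the command line:

```python
def snap_to_16(value: int) -> int:
    """Round a dimension to the nearest multiple of 16 (minimum 16)."""
    return max(16, round(value / 16) * 16)

def net_resolution(width: int, height: int) -> str:
    """Build a --net_resolution argument string from an arbitrary size."""
    return f"{snap_to_16(width)}x{snap_to_16(height)}"

print(net_resolution(320, 170))  # -> 320x176
print(net_resolution(150, 75))   # -> 144x80
```

Lowering this value trades accuracy for speed, which is why 160x80 runs so much more smoothly than the default.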

3. Using the Model

After trying the demo I was again stuck on what to do next; since the instructor had not specified a deliverable, I kept searching online for other people's examples.

  1. Douyin "dance-off" example
    Implementing a Douyin dance-off machine with Python + openpose

This example estimates the human pose, extracts the coordinates of each joint, and classifies the pose from the relationships between those coordinates. After consolidating the code and several attempts I roughly figured it out: it decides a hand is raised when the wrist is higher than the neck. Because the script takes command-line arguments it has to be run from the command line; running it directly in PyCharm fails unless the parameters are supplied, though you can also hard-code them in the script.
The COCO model has 18 keypoints. In the code the dictionary BODY_PARTS names each keypoint; the order is fixed by the model (you can look up the corresponding diagram online). POSE_PAIRS lists which pairs of keypoints are connected by lines.
Note: the x axis runs left to right and the y axis runs top to bottom, so a smaller y means higher in the image.
Example invocation:

python my_openpose.py --proto openpose\models\pose\coco\pose_deploy_linevec.prototxt --model openpose\models\pose\coco\pose_iter_440000.caffemodel --dataset COCO --input 'D:/360MoveData/Users/asus/Desktop/machine-learning/openpose-example/openpose/examples/media/12.jpg'

# __author: HY
# date: 2020/7/18
# To use Inference Engine backend, specify location of plugins:
# source /opt/intel/computer_vision_sdk/bin/setupvars.sh
import cv2 as cv
import numpy as np
import argparse

parser = argparse.ArgumentParser(
        description='This script is used to demonstrate OpenPose human pose estimation network '
                    'from https://github.com/CMU-Perceptual-Computing-Lab/openpose project using OpenCV. '
                    'The sample and model are simplified and could be used for a single person on the frame.')
parser.add_argument('--input', help='Path to image or video. Skip to capture frames from camera')
parser.add_argument('--proto', help='Path to .prototxt')
parser.add_argument('--model', help='Path to .caffemodel')
parser.add_argument('--dataset', help='Specify what kind of model was trained. '
                                      'It could be (COCO, MPI, HAND) depends on dataset.')
parser.add_argument('--thr', default=0.1, type=float, help='Threshold value for pose parts heat map')
parser.add_argument('--width', default=368, type=int, help='Resize input to specific width.')
parser.add_argument('--height', default=368, type=int, help='Resize input to specific height.')
parser.add_argument('--scale', default=0.003922, type=float, help='Scale for blob.')

args = parser.parse_args()

if args.dataset == 'COCO':
    BODY_PARTS = { "Nose": 0, "Neck": 1, "RShoulder": 2, "RElbow": 3, "RWrist": 4,
                   "LShoulder": 5, "LElbow": 6, "LWrist": 7, "RHip": 8, "RKnee": 9,
                   "RAnkle": 10, "LHip": 11, "LKnee": 12, "LAnkle": 13, "REye": 14,
                   "LEye": 15, "REar": 16, "LEar": 17, "Background": 18 }

    POSE_PAIRS = [ ["Neck", "RShoulder"], ["Neck", "LShoulder"], ["RShoulder", "RElbow"],
                   ["RElbow", "RWrist"], ["LShoulder", "LElbow"], ["LElbow", "LWrist"],
                   ["Neck", "RHip"], ["RHip", "RKnee"], ["RKnee", "RAnkle"], ["Neck", "LHip"],
                   ["LHip", "LKnee"], ["LKnee", "LAnkle"], ["Neck", "Nose"], ["Nose", "REye"],
                   ["REye", "REar"], ["Nose", "LEye"], ["LEye", "LEar"] ]
elif args.dataset == 'MPI':
    BODY_PARTS = { "Head": 0, "Neck": 1, "RShoulder": 2, "RElbow": 3, "RWrist": 4,
                   "LShoulder": 5, "LElbow": 6, "LWrist": 7, "RHip": 8, "RKnee": 9,
                   "RAnkle": 10, "LHip": 11, "LKnee": 12, "LAnkle": 13, "Chest": 14,
                   "Background": 15 }

    POSE_PAIRS = [ ["Head", "Neck"], ["Neck", "RShoulder"], ["RShoulder", "RElbow"],
                   ["RElbow", "RWrist"], ["Neck", "LShoulder"], ["LShoulder", "LElbow"],
                   ["LElbow", "LWrist"], ["Neck", "Chest"], ["Chest", "RHip"], ["RHip", "RKnee"],
                   ["RKnee", "RAnkle"], ["Chest", "LHip"], ["LHip", "LKnee"], ["LKnee", "LAnkle"] ]
else:
    assert(args.dataset == 'HAND')
    BODY_PARTS = { "Wrist": 0,
                   "ThumbMetacarpal": 1, "ThumbProximal": 2, "ThumbMiddle": 3, "ThumbDistal": 4,
                   "IndexFingerMetacarpal": 5, "IndexFingerProximal": 6, "IndexFingerMiddle": 7, "IndexFingerDistal": 8,
                   "MiddleFingerMetacarpal": 9, "MiddleFingerProximal": 10, "MiddleFingerMiddle": 11, "MiddleFingerDistal": 12,
                   "RingFingerMetacarpal": 13, "RingFingerProximal": 14, "RingFingerMiddle": 15, "RingFingerDistal": 16,
                   "LittleFingerMetacarpal": 17, "LittleFingerProximal": 18, "LittleFingerMiddle": 19, "LittleFingerDistal": 20,
                 }

    POSE_PAIRS = [ ["Wrist", "ThumbMetacarpal"], ["ThumbMetacarpal", "ThumbProximal"],
                   ["ThumbProximal", "ThumbMiddle"], ["ThumbMiddle", "ThumbDistal"],
                   ["Wrist", "IndexFingerMetacarpal"], ["IndexFingerMetacarpal", "IndexFingerProximal"],
                   ["IndexFingerProximal", "IndexFingerMiddle"], ["IndexFingerMiddle", "IndexFingerDistal"],
                   ["Wrist", "MiddleFingerMetacarpal"], ["MiddleFingerMetacarpal", "MiddleFingerProximal"],
                   ["MiddleFingerProximal", "MiddleFingerMiddle"], ["MiddleFingerMiddle", "MiddleFingerDistal"],
                   ["Wrist", "RingFingerMetacarpal"], ["RingFingerMetacarpal", "RingFingerProximal"],
                   ["RingFingerProximal", "RingFingerMiddle"], ["RingFingerMiddle", "RingFingerDistal"],
                   ["Wrist", "LittleFingerMetacarpal"], ["LittleFingerMetacarpal", "LittleFingerProximal"],
                   ["LittleFingerProximal", "LittleFingerMiddle"], ["LittleFingerMiddle", "LittleFingerDistal"] ]


inWidth = args.width
inHeight = args.height
inScale = args.scale

net = cv.dnn.readNet(cv.samples.findFile(args.proto), cv.samples.findFile(args.model))

cap = cv.VideoCapture(args.input if args.input else 0)

while cv.waitKey(1) < 0:
    hasFrame, frame = cap.read()
    if not hasFrame:
        cv.waitKey()
        break

    frameWidth = frame.shape[1]
    frameHeight = frame.shape[0]
    inp = cv.dnn.blobFromImage(frame, inScale, (inWidth, inHeight),
                              (0, 0, 0), swapRB=False, crop=False)
    net.setInput(inp)
    out = net.forward()

    assert(len(BODY_PARTS) <= out.shape[1])

    points = []
    for i in range(len(BODY_PARTS)):
        # Slice heatmap of corresponding body's part.
        heatMap = out[0, i, :, :]

        # Originally, we try to find all the local maximums. To simplify a sample
        # we just find a global one. However only a single pose at the same time
        # could be detected this way.
        _, conf, _, point = cv.minMaxLoc(heatMap)
        x = (frameWidth * point[0]) / out.shape[3]
        y = (frameHeight * point[1]) / out.shape[2]

        # Add the point only if its confidence exceeds the threshold.
        points.append((int(x), int(y)) if conf > args.thr else None)

    for pair in POSE_PAIRS:
        partFrom = pair[0]
        partTo = pair[1]
        assert(partFrom in BODY_PARTS)
        assert(partTo in BODY_PARTS)

        idFrom = BODY_PARTS[partFrom]
        idTo = BODY_PARTS[partTo]

        if points[idFrom] and points[idTo]:
            cv.line(frame, points[idFrom], points[idTo], (0, 255, 0), 3)
            cv.ellipse(frame, points[idFrom], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
            cv.ellipse(frame, points[idTo], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)

    t, _ = net.getPerfProfile()
    freq = cv.getTickFrequency() / 1000
    cv.putText(frame, '%.2fms' % (t / freq), (10, 20), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0))

    neck = points[BODY_PARTS['Neck']]
    left_wrist = points[BODY_PARTS['LWrist']]
    right_wrist = points[BODY_PARTS['RWrist']]
    print(neck, left_wrist, right_wrist)
    # Both wrists above the neck (smaller y = higher on screen) means hands up.
    if neck and left_wrist and right_wrist and left_wrist[1] < neck[1] and right_wrist[1] < neck[1]:
        cv.putText(frame, 'HANDS UP!', (10, 100), cv.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 2)

    cv.imshow('OpenPose using OpenCV', frame)

It runs quite slowly; I profiled it, and the bottleneck is the single line out = net.forward(), which takes tens of thousands of times longer than the rest of the loop. But the hands-up detection works, and it is easy to extend to other simple checks, for example: left hand raised (left wrist y above the neck, right wrist below), right hand raised, crossed hands (compare the wrists' x coordinates), crossed feet, bent arm or leg (the angle between the joint segments below 180°), arms spread (wrist-to-wrist x distance much larger than the hip width), and so on.
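A couple of such extra checks can be sketched like this (my own illustration, not from the original example; the names follow the COCO BODY_PARTS dictionary above, and remember that a smaller y means higher on screen):

```python
# Hypothetical gesture checks on the `points` list produced by the script
# above. Each entry is an (x, y) tuple, or None if the joint was not found.

def get(points, body_parts, name):
    """Fetch a named keypoint, or None if it was not detected."""
    return points[body_parts[name]]

def left_hand_raised(points, body_parts):
    neck = get(points, body_parts, 'Neck')
    lw = get(points, body_parts, 'LWrist')
    rw = get(points, body_parts, 'RWrist')
    # Left wrist above the neck, right wrist at or below it.
    return bool(neck and lw and rw and lw[1] < neck[1] and rw[1] >= neck[1])

def hands_crossed(points, body_parts):
    lw = get(points, body_parts, 'LWrist')
    rw = get(points, body_parts, 'RWrist')
    # Facing the camera, the left wrist normally appears to the right of
    # the right wrist in image coordinates; crossing the arms swaps that.
    return bool(lw and rw and lw[0] < rw[0])
```

Each check degrades gracefully: if any required keypoint is None (below the confidence threshold), the gesture simply reports False.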

1. COCO (18-point) model, keypoint positions:
[Figure 4]
{0, "Nose"},
{1, "Neck"},
{2, "RShoulder"},
{3, "RElbow"},
{4, "RWrist"},
{5, "LShoulder"},
{6, "LElbow"},
{7, "LWrist"},
{8, "RHip"},
{9, "RKnee"},
{10, "RAnkle"},
{11, "LHip"},
{12, "LKnee"},
{13, "LAnkle"},
{14, "REye"},
{15, "LEye"},
{16, "REar"},
{17, "LEar"}
2. BODY_25 (25-point) model, keypoint positions:
{0, "Nose"},
{1, "Neck"},
{2, "RShoulder"},
{3, "RElbow"},
{4, "RWrist"},
{5, "LShoulder"},
{6, "LElbow"},
{7, "LWrist"},
{8, "MidHip"},
{9, "RHip"},
{10, "RKnee"},
{11, "RAnkle"},
{12, "LHip"},
{13, "LKnee"},
{14, "LAnkle"},
{15, "REye"},
{16, "LEye"},
{17, "REar"},
{18, "LEar"},
{19, "LBigToe"},
{20, "LSmallToe"},
{21, "LHeel"},
{22, "RBigToe"},
{23, "RSmallToe"},
{24, "RHeel"}
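The 25-point table above can be written as a Python dictionary in the same style as the script's COCO BODY_PARTS (my own transcription of the list, in case you switch the script to the BODY_25 model):

```python
# BODY_25 keypoint names, transcribed from the table above.
BODY_PARTS_25 = {
    "Nose": 0, "Neck": 1, "RShoulder": 2, "RElbow": 3, "RWrist": 4,
    "LShoulder": 5, "LElbow": 6, "LWrist": 7, "MidHip": 8, "RHip": 9,
    "RKnee": 10, "RAnkle": 11, "LHip": 12, "LKnee": 13, "LAnkle": 14,
    "REye": 15, "LEye": 16, "REar": 17, "LEar": 18, "LBigToe": 19,
    "LSmallToe": 20, "LHeel": 21, "RBigToe": 22, "RSmallToe": 23,
    "RHeel": 24,
}

print(BODY_PARTS_25["MidHip"])  # -> 8
```

Note the index shift: from "MidHip" (index 8) onward, BODY_25 indices no longer line up with COCO's, so the BODY_PARTS dictionary must be swapped wholesale rather than patched.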

  2. Face-detection alarm
    Action recognition based on openpose (part 3): a keypoint electronic fence

This one uses the face-detection model models/face/haarcascade_frontalface_alt.xml. After detecting a face it draws a green box around it; a purple detection box is drawn at a fixed position in the video, and when the top-left corner of the green box enters the purple box, a warning is issued.
I found this example very interesting and quite practical, so it is the one I focused on.
I will not repeat the parts that stayed the same. My modifications were: building a UI with six buttons to adjust the position and size of the purple box; changing the trigger condition from "the green box's top-left corner enters the range" to "the two boxes overlap at all"; and using a sound package to play an audible alarm.

  3. Hand and body recognition
    openpose gesture (pose) recognition on Windows
    Original article (CSDN author "liu24244", CC 4.0 BY-SA): https://blog.csdn.net/liu24244/article/details/106452542

The first script in this example detects hands and the second detects the body. In the first there is a box, and the hand must be placed inside it to have its joints detected. After reading through the code a few times, I found the cause is this line:

[op.Rectangle(0, 0, 0, 0),
 op.Rectangle(100*rate, 100*rate, 300*rate, 300*rate)]

which limits the detection region; changing the 100, 100 coordinates to 0, 0 makes it detect over the whole frame.
