Preface
To improve development efficiency in later projects, this experiment packages three pieces of functionality into standalone classes so they can be ported and extended later: the OpenNI layer that drives the Kinect, the OpenCV preprocessing of the raw data obtained from OpenNI, and the hand segmentation step used in gesture recognition (gesture recognition being the final goal of this system). Several earlier posts already covered the design of these three parts, for example: Kinect+OpenNI学习笔记之3(获取kinect的数据并在Qt中显示的类的设计), Kinect+OpenNI学习笔记之11(OpenNI驱动kinect手势相关的类的设计), and Kinect+OpenNI学习笔记之12(简单手势所表示的数字的识别). This post consolidates those earlier designs and optimizes the classes.
Development environment: QtCreator 2.5.1 + OpenNI 1.5.4.0 + Qt 4.8.2 + OpenCV 2.4.3
Experiment Basics
OpenNI/OpenCV notes:
Designing each of the three classes on its own (the Kinect driver class, the OpenCV display class, and the hand pre-segmentation class) is fairly straightforward given the earlier posts. But because the three classes interact, a poor design makes the displayed images very laggy. The following points deserve attention (in this program, the Kinect driver class is COpenniHand, the OpenCV display class is CKinectOpenCV, and the hand pre-segmentation class is CKinectHandSegment):
The Kinect driver class contains the complete Kinect driver logic (which takes a noticeable amount of time per frame), and the OpenCV display class calls into it, which already amounts to one full driver pass. The hand pre-segmentation step needs the hand center points; if the segmentation class ran the Kinect driver again to obtain them, each pass through the system would drive the Kinect twice, wasting resources and making the display stutter. Values the driver class can produce, such as the hand center positions, should therefore be exposed through the OpenCV display class instead.
Since CKinectOpenCV has to return the hand center positions, the original plan was a public member function whose return value computed them, but a function returning a map type kept failing at runtime (in theory it should not), so the class was changed to return the map-typed member variable holding the hand centers directly. Before that variable is returned, its value must be up to date, so the driver's UpdateData function has to run first. This is designed here as a switch function: if fetching the hand centers is allowed, the switch parameter is set to true; see the code for details.
C/C++ notes:
After defining an object of a class, an initialization function is usually called first to give the member variables their initial settings. But when certain variables must be re-initialized before every call to some other member function, that initialization does not belong in the class's init function; it should go in a separate private helper that each such function calls at its start.
Classes are designed partly for convenience and partly for efficiency, and sometimes convenience alone should not drive the design. For example, this design must provide the segmented image, the original image, the depth image and so on. One function per image would certainly work, but if every getter also updated the Kinect driver data, efficiency would be very poor. In this design, all the device-update code is placed in a single function, and that function must not be called by each image getter (otherwise the device would still be driven multiple times per pass); instead it is called once per loop iteration on the object itself. The result: in each iteration of the main loop we call one extra function after defining the object, then fetch the needed images. That is only one more line of code, but it saves considerable time.
Experiment Results
As before, this experiment obtains the Kinect depth map, color map, hand segmentation map, hand contour map, and so on. Below are the results of the hand segmentation and contour processing:
Experiment code and comments:
copennihand.h:
#ifndef COpenniHand_H
#define COpenniHand_H

#include <XnCppWrapper.h>
#include <iostream>
#include <vector>
#include <map>

using namespace xn;
using namespace std;

class COpenniHand
{
public:
    COpenniHand();
    ~COpenniHand();

    /* Internal OpenNI initialization and property setup */
    bool Initial();

    /* Start OpenNI reading the Kinect data */
    bool Start();

    /* Update the data read by OpenNI */
    bool UpdateData();

    /* Get the color image node */
    ImageGenerator& getImageGenerator();

    /* Get the depth image node */
    DepthGenerator& getDepthGenerator();

    /* Get the gesture node */
    GestureGenerator& getGestureGenerator();

    /* Get the hand node */
    HandsGenerator& getHandGenerator();

    DepthMetaData depth_metadata_;   // depth image data
    ImageMetaData image_metadata_;   // color image data
    std::map<XnUserID, XnPoint3D> hand_points_;                  // real-time center point of each tracked hand
    std::map< XnUserID, vector<XnPoint3D> > hands_track_points_; // track history used to draw each hand's trajectory

private:
    /* Returns true if an error occurred, false otherwise */
    bool CheckError(const char* error);

    /* Callback fired when a gesture has been fully recognized */
    static void XN_CALLBACK_TYPE CBGestureRecognized(xn::GestureGenerator &generator, const XnChar *strGesture, const XnPoint3D *pIDPosition, const XnPoint3D *pEndPosition, void *pCookie);

    /* Callback fired while a gesture is in progress */
    static void XN_CALLBACK_TYPE CBGestureProgress(xn::GestureGenerator &generator, const XnChar *strGesture, const XnPoint3D *pPosition, XnFloat fProgress, void *pCookie);

    /* Callback fired when hand tracking starts */
    static void XN_CALLBACK_TYPE HandCreate(HandsGenerator& rHands, XnUserID xUID, const XnPoint3D* pPosition, XnFloat fTime, void* pCookie);

    /* Callback fired when the hand position is updated */
    static void XN_CALLBACK_TYPE HandUpdate(HandsGenerator& rHands, XnUserID xUID, const XnPoint3D* pPosition, XnFloat fTime, void* pCookie);

    /* Callback fired when hand tracking is lost */
    static void XN_CALLBACK_TYPE HandDestroy(HandsGenerator& rHands, XnUserID xUID, XnFloat fTime, void* pCookie);

    XnStatus status_;
    Context context_;
    XnMapOutputMode xmode_;
    ImageGenerator image_generator_;
    DepthGenerator depth_generator_;
    GestureGenerator gesture_generator_;
    HandsGenerator hand_generator_;
};

#endif // COpenniHand_H
copennihand.cpp:
#include "copennihand.h"
#include <XnCppWrapper.h>
#include <iostream>
#include <map>

using namespace xn;
using namespace std;

COpenniHand::COpenniHand()
{
}

COpenniHand::~COpenniHand()
{
}

bool COpenniHand::Initial()
{
    status_ = context_.Init();
    if(CheckError("Context initial failed!")) {
        return false;
    }
    context_.SetGlobalMirror(true);  // enable mirroring
    xmode_.nXRes = 640;
    xmode_.nYRes = 480;
    xmode_.nFPS = 30;

    // Create the color node
    status_ = image_generator_.Create(context_);
    if(CheckError("Create image generator error!")) {
        return false;
    }

    // Set the color image output mode
    status_ = image_generator_.SetMapOutputMode(xmode_);
    if(CheckError("SetMapOutputMode error!")) {
        return false;
    }

    // Create the depth node
    status_ = depth_generator_.Create(context_);
    if(CheckError("Create depth generator error!")) {
        return false;
    }

    // Set the depth image output mode
    status_ = depth_generator_.SetMapOutputMode(xmode_);
    if(CheckError("SetMapOutputMode error!")) {
        return false;
    }

    // Create the gesture node
    status_ = gesture_generator_.Create(context_);
    if(CheckError("Create gesture generator error!")) {
        return false;
    }

    /* Add the gesture types to recognize */
    gesture_generator_.AddGesture("Wave", NULL);
    gesture_generator_.AddGesture("click", NULL);
    gesture_generator_.AddGesture("RaiseHand", NULL);
    gesture_generator_.AddGesture("MovingHand", NULL);

    // Create the hand node
    status_ = hand_generator_.Create(context_);
    if(CheckError("Create hand generator error!")) {
        return false;
    }

    // Viewpoint correction: align the depth view with the color view
    status_ = depth_generator_.GetAlternativeViewPointCap().SetViewPoint(image_generator_);
    if(CheckError("Can't set the alternative view point on depth generator!")) {
        return false;
    }

    // Register the gesture callbacks
    XnCallbackHandle gesture_cb;
    gesture_generator_.RegisterGestureCallbacks(CBGestureRecognized, CBGestureProgress, this, gesture_cb);

    // Register the hand callbacks
    XnCallbackHandle hands_cb;
    hand_generator_.RegisterHandCallbacks(HandCreate, HandUpdate, HandDestroy, this, hands_cb);

    return true;
}

bool COpenniHand::Start()
{
    status_ = context_.StartGeneratingAll();
    if(CheckError("Start generating error!")) {
        return false;
    }
    return true;
}

bool COpenniHand::UpdateData()
{
    status_ = context_.WaitNoneUpdateAll();
    if(CheckError("Update data error!")) {
        return false;
    }
    // Fetch the latest frame data
    image_generator_.GetMetaData(image_metadata_);
    depth_generator_.GetMetaData(depth_metadata_);
    return true;
}

ImageGenerator &COpenniHand::getImageGenerator()
{
    return image_generator_;
}

DepthGenerator &COpenniHand::getDepthGenerator()
{
    return depth_generator_;
}

GestureGenerator &COpenniHand::getGestureGenerator()
{
    return gesture_generator_;
}

HandsGenerator &COpenniHand::getHandGenerator()
{
    return hand_generator_;
}

bool COpenniHand::CheckError(const char *error)
{
    if(status_ != XN_STATUS_OK) {
        cerr << error << ": " << xnGetStatusString(status_) << endl;
        return true;
    }
    return false;
}

void COpenniHand::CBGestureRecognized(GestureGenerator &generator, const XnChar *strGesture, const XnPoint3D *pIDPosition, const XnPoint3D *pEndPosition, void *pCookie)
{
    COpenniHand *openni = (COpenniHand*)pCookie;
    openni->hand_generator_.StartTracking(*pEndPosition);
}

void COpenniHand::CBGestureProgress(GestureGenerator &generator, const XnChar *strGesture, const XnPoint3D *pPosition, XnFloat fProgress, void *pCookie)
{
}

void COpenniHand::HandCreate(HandsGenerator &rHands, XnUserID xUID, const XnPoint3D *pPosition, XnFloat fTime, void *pCookie)
{
    COpenniHand *openni = (COpenniHand*)pCookie;
    XnPoint3D project_pos;
    openni->depth_generator_.ConvertRealWorldToProjective(1, pPosition, &project_pos);
    pair<XnUserID, XnPoint3D> hand_point_pair(xUID, XnPoint3D());  // the second element of the pair can be default-constructed here
    hand_point_pair.second = project_pos;
    openni->hand_points_.insert(hand_point_pair);                  // store the detected hand in the hand_points_ map
    pair< XnUserID, vector<XnPoint3D> > hand_track_point(xUID, vector<XnPoint3D>());
    hand_track_point.second.push_back(project_pos);
    openni->hands_track_points_.insert(hand_track_point);
}

void COpenniHand::HandUpdate(HandsGenerator &rHands, XnUserID xUID, const XnPoint3D *pPosition, XnFloat fTime, void *pCookie)
{
    COpenniHand *openni = (COpenniHand*)pCookie;
    XnPoint3D project_pos;
    openni->depth_generator_.ConvertRealWorldToProjective(1, pPosition, &project_pos);
    openni->hand_points_.find(xUID)->second = project_pos;
    openni->hands_track_points_.find(xUID)->second.push_back(project_pos);
}

void COpenniHand::HandDestroy(HandsGenerator &rHands, XnUserID xUID, XnFloat fTime, void *pCookie)
{
    COpenniHand *openni = (COpenniHand*)pCookie;
    openni->hand_points_.erase(openni->hand_points_.find(xUID));
    openni->hands_track_points_.erase(openni->hands_track_points_.find(xUID));
}
ckinectopencv.h:
#ifndef CKINECTOPENCV_H
#define CKINECTOPENCV_H

#include <opencv2/core/core.hpp>
#include "copennihand.h"

using namespace cv;

class CKinectOpenCV
{
public:
    CKinectOpenCV();
    ~CKinectOpenCV();

    /* Call this before reading any of the getters below: OpenNI's data is
       updated continuously, so all per-frame processing is gathered in one function */
    void GetAllInformation();
    Mat GetColorImage();
    Mat GetDepthImage();
    std::map<XnUserID, XnPoint3D> GetHandPoints();

private:
    COpenniHand openni_hand_;
    std::map<XnUserID, XnPoint3D> hand_points_;  // real-time center point of each tracked hand
    Mat color_image_;  // color image
    Mat depth_image_;  // depth image
};

#endif // CKINECTOPENCV_H
ckinectopencv.cpp:
#include "ckinectopencv.h"
#include <opencv2/imgproc/imgproc.hpp>
#include <map>

using namespace cv;
using namespace std;

#define DEPTH_SCALE_FACTOR 255./4096.

CKinectOpenCV::CKinectOpenCV()
{
    /* Initialize the OpenNI device */
    CV_Assert(openni_hand_.Initial());

    /* Start the OpenNI device */
    CV_Assert(openni_hand_.Start());
}

CKinectOpenCV::~CKinectOpenCV()
{
}

void CKinectOpenCV::GetAllInformation()
{
    CV_Assert(openni_hand_.UpdateData());

    /* Fetch the color image */
    Mat color_image_src(openni_hand_.image_metadata_.YRes(), openni_hand_.image_metadata_.XRes(),
                        CV_8UC3, (char *)openni_hand_.image_metadata_.Data());
    cvtColor(color_image_src, color_image_, CV_RGB2BGR);

    /* Fetch the depth image; the Kinect depth map is actually unsigned 16-bit data */
    Mat depth_image_src(openni_hand_.depth_metadata_.YRes(), openni_hand_.depth_metadata_.XRes(),
                        CV_16UC1, (char *)openni_hand_.depth_metadata_.Data());
    depth_image_src.convertTo(depth_image_, CV_8U, DEPTH_SCALE_FACTOR);

    hand_points_ = openni_hand_.hand_points_;  // cache the hand center positions
    return;
}

Mat CKinectOpenCV::GetColorImage()
{
    return color_image_;
}

Mat CKinectOpenCV::GetDepthImage()
{
    return depth_image_;
}

std::map<XnUserID, XnPoint3D> CKinectOpenCV::GetHandPoints()
{
    return hand_points_;
}
ckinecthandsegment.h:
#ifndef KINECTHAND_H
#define KINECTHAND_H

#include "ckinectopencv.h"

using namespace cv;

#define MAX_HANDS_COLOR 10
#define MAX_HANDS_NUMBER 10

class CKinectHandSegment
{
public:
    CKinectHandSegment();
    ~CKinectHandSegment();

    void Initial();
    void StartKinectHand();  // run the Kinect device update for the current frame
    Mat GetColorImageWithHandsPoint();
    Mat GetHandSegmentImage();
    Mat GetHandHandlingImage();
    Mat GetColorImage();
    Mat GetDepthImage();

private:
    CKinectOpenCV kinect_opencv_;
    vector<Scalar> hand_center_color_array_;  // the 10 default colors for hand centers
    std::map<XnUserID, XnPoint3D> hand_points_;
    vector<unsigned int> hand_depth_;
    vector<Rect> hands_roi_;
    bool hand_segment_flag_;
    Mat color_image_with_handspoint_;  // color image with the hand center points drawn on it
    Mat color_image_;  // color image
    Mat depth_image_;
    Mat hand_segment_image_;
    Mat hand_handling_image_;
    Mat hand_segment_mask_;
};

#endif // KINECTHAND_H
ckinecthandsegment.cpp:
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include "ckinecthandsegment.h"
#include "copennihand.h"
#include "ckinectopencv.h"

using namespace cv;
using namespace std;

#define DEPTH_SCALE_FACTOR 255./4096.
#define ROI_HAND_WIDTH 140
#define ROI_HAND_HEIGHT 140
#define MEDIAN_BLUR_K 5
#define XRES 640
#define YRES 480
#define DEPTH_SEGMENT_THRESH 5
#define HAND_LIKELY_AREA 2000

CKinectHandSegment::CKinectHandSegment()
{
}

CKinectHandSegment::~CKinectHandSegment()
{
}

void CKinectHandSegment::Initial()
{
    color_image_with_handspoint_ = kinect_opencv_.GetColorImage();
    depth_image_ = kinect_opencv_.GetDepthImage();

    // The 10 default colors used to mark different hand centers
    hand_center_color_array_.push_back(Scalar(255, 0, 0));
    hand_center_color_array_.push_back(Scalar(0, 255, 0));
    hand_center_color_array_.push_back(Scalar(0, 0, 255));
    hand_center_color_array_.push_back(Scalar(255, 0, 255));
    hand_center_color_array_.push_back(Scalar(255, 255, 0));
    hand_center_color_array_.push_back(Scalar(0, 255, 255));
    hand_center_color_array_.push_back(Scalar(128, 255, 0));
    hand_center_color_array_.push_back(Scalar(0, 128, 255));
    hand_center_color_array_.push_back(Scalar(255, 0, 128));
    hand_center_color_array_.push_back(Scalar(255, 128, 255));

    vector<unsigned int> hand_depth_temp(MAX_HANDS_NUMBER, 0);
    hand_depth_ = hand_depth_temp;
    vector<Rect> hands_roi_temp(MAX_HANDS_NUMBER, Rect(XRES/2, YRES/2, ROI_HAND_WIDTH, ROI_HAND_HEIGHT));
    hands_roi_ = hands_roi_temp;
}

void CKinectHandSegment::StartKinectHand()
{
    kinect_opencv_.GetAllInformation();
}

Mat CKinectHandSegment::GetColorImage()
{
    return kinect_opencv_.GetColorImage();
}

Mat CKinectHandSegment::GetDepthImage()
{
    return kinect_opencv_.GetDepthImage();
}

/* This function only draws the hand center points on the color image from the
   Kinect; the rest of the image is left unchanged */
Mat CKinectHandSegment::GetColorImageWithHandsPoint()
{
    color_image_with_handspoint_ = kinect_opencv_.GetColorImage();
    hand_points_ = kinect_opencv_.GetHandPoints();
    for(auto itUser = hand_points_.cbegin(); itUser != hand_points_.cend(); ++itUser) {
        circle(color_image_with_handspoint_, Point(itUser->second.X, itUser->second.Y), 5,
               hand_center_color_array_.at(itUser->first % hand_center_color_array_.size()), 3, 8);
    }
    return color_image_with_handspoint_;
}

Mat CKinectHandSegment::GetHandSegmentImage()
{
    hand_segment_flag_ = false;
    color_image_ = kinect_opencv_.GetColorImage();
    depth_image_ = kinect_opencv_.GetDepthImage();
    hand_points_ = kinect_opencv_.GetHandPoints();

    // zeros() is a static function, so it is called through the class, not an object
    hand_segment_mask_ = Mat::zeros(color_image_.size(), CV_8UC1);

    for(auto itUser = hand_points_.cbegin(); itUser != hand_points_.cend(); ++itUser) {
        /* Record the depth of each hand (indexing by itUser->first without the
           modulo caused a bug in the program) */
        hand_depth_.at(itUser->first % MAX_HANDS_COLOR) = (unsigned int)(itUser->second.Z * DEPTH_SCALE_FACTOR);

        /* Set a separate region of interest around each hand, clamped to the image bounds */
        Rect &roi = hands_roi_.at(itUser->first % MAX_HANDS_NUMBER);
        roi = Rect(itUser->second.X - ROI_HAND_WIDTH/2, itUser->second.Y - ROI_HAND_HEIGHT/2,
                   ROI_HAND_WIDTH, ROI_HAND_HEIGHT);
        if(roi.x <= 0)   roi.x = 0;
        if(roi.x > XRES) roi.x = XRES;
        if(roi.y <= 0)   roi.y = 0;
        if(roi.y > YRES) roi.y = YRES;
    }

    // Extract the hand's part of the mask; no matter how many channels the source
    // image has, declaring the mask matrix as single-channel is enough
    for(auto itUser = hand_points_.cbegin(); itUser != hand_points_.cend(); ++itUser) {
        const Rect &roi = hands_roi_.at(itUser->first % MAX_HANDS_NUMBER);
        unsigned int hand_depth = hand_depth_.at(itUser->first % MAX_HANDS_NUMBER);
        for(int i = roi.x; i < min(roi.x + roi.width, XRES); i++) {
            for(int j = roi.y; j < min(roi.y + roi.height, YRES); j++) {
                // Keep only pixels whose depth lies within DEPTH_SEGMENT_THRESH
                // of the hand center's depth
                bool in_band = (hand_depth - DEPTH_SEGMENT_THRESH < depth_image_.at<unsigned char>(j, i))
                            && (hand_depth + DEPTH_SEGMENT_THRESH > depth_image_.at<unsigned char>(j, i));
                hand_segment_mask_.at<unsigned char>(j, i) = in_band ? 255 : 0;
            }
        }
    }
    medianBlur(hand_segment_mask_, hand_segment_mask_, MEDIAN_BLUR_K);

    hand_segment_image_.convertTo(hand_segment_image_, CV_8UC3, 0, 0);  // clear the previous frame's result
    color_image_.copyTo(hand_segment_image_, hand_segment_mask_);

    hand_segment_flag_ = true;  // mark that segmentation has completed before returning
    return hand_segment_image_;
}

Mat CKinectHandSegment::GetHandHandlingImage()
{
    /* Extract contours from the mask image and draw them on the gesture image */
    std::vector< std::vector<Point> > contours;

    // The mask produced by the segmentation function is needed below, so make
    // sure GetHandSegmentImage() has already been called this frame
    CV_Assert(hand_segment_flag_);

    findContours(hand_segment_mask_, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);  // contours of the mask image
    hand_handling_image_ = Mat::zeros(color_image_.rows, color_image_.cols, CV_8UC3);

    // Polygon fits, convex hulls and convexity defects are only computed for detected contours
    for(int i = 0; i < contours.size(); i++) {
        Mat contour_mat = Mat(contours[i]);
        if(contourArea(contour_mat) > HAND_LIKELY_AREA) {  // regions large enough to plausibly be a hand
            /* Fit a polygonal curve to the contour */
            std::vector<Point> approx_poly_curve;
            approxPolyDP(contour_mat, approx_poly_curve, 10, true);
            std::vector< std::vector<Point> > approx_poly_curve_debug;
            approx_poly_curve_debug.push_back(approx_poly_curve);

            drawContours(hand_handling_image_, contours, i, Scalar(255, 0, 0), 1, 8);  // draw the contour
            // drawContours(hand_handling_image_, approx_poly_curve_debug, 0, Scalar(256, 128, 128), 1, 8);  // draw the fitted polygon

            /* Compute the convex hull of the fitted polygon */
            vector<int> hull;
            convexHull(Mat(approx_poly_curve), hull, true);
            for(int j = 0; j < hull.size(); j++) {
                circle(hand_handling_image_, approx_poly_curve[hull[j]], 2, Scalar(0, 255, 0), 2, 8);
            }

            /* Compute the convexity defects of the fitted polygon */
            std::vector<Vec4i> convexity_defects;
            if(Mat(approx_poly_curve).checkVector(2, CV_32S) > 3)
                convexityDefects(approx_poly_curve, Mat(hull), convexity_defects);
            for(int j = 0; j < convexity_defects.size(); j++) {
                circle(hand_handling_image_, approx_poly_curve[convexity_defects[j][2]], 2, Scalar(0, 0, 255), 2, 8);
            }
        }
    }

    /* Draw the hand center points */
    for(auto itUser = hand_points_.cbegin(); itUser != hand_points_.cend(); ++itUser) {
        circle(hand_handling_image_, Point(itUser->second.X, itUser->second.Y), 3, Scalar(0, 255, 255), 3, 8);
    }
    return hand_handling_image_;
}
main.cpp:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include "ckinectopencv.h"
#include "ckinecthandsegment.h"

using namespace std;
using namespace cv;

int main()
{
    CKinectHandSegment kinect_hand_segment;
    Mat color_image;
    Mat depth_image;
    Mat hand_segment;
    Mat hand_handling_image;

    kinect_hand_segment.Initial();
    while(1) {
        kinect_hand_segment.StartKinectHand();
        color_image = kinect_hand_segment.GetColorImageWithHandsPoint();
        hand_segment = kinect_hand_segment.GetHandSegmentImage();
        hand_handling_image = kinect_hand_segment.GetHandHandlingImage();
        depth_image = kinect_hand_segment.GetDepthImage();

        imshow("color_image", color_image);
        imshow("depth_image", depth_image);
        imshow("hand_segment", hand_segment);
        imshow("hand_handling", hand_handling_image);
        waitKey(30);
    }
    return 0;
}
Summary: with these basic utility classes in place, testing the gesture recognition algorithms that come next will be much more convenient. Keep going!
References:
Kinect+OpenNI学习笔记之3(获取kinect的数据并在Qt中显示的类的设计)
Kinect+OpenNI学习笔记之11(OpenNI驱动kinect手势相关的类的设计)
Kinect+OpenNI学习笔记之12(简单手势所表示的数字的识别)