Kinect Motion-Sensing Robot (Part 2) — Body Motion Recognition

By 马冬亮 (凝霜 Loki)

A One-Man War (http://blog.csdn.net/MDL13412)

Background

        Motion-sensing technology falls within the scope of NUI (Natural User Interface): it lets users interact with nearby devices or their environment through body movements. The main implementation approaches are inertial sensing, optical sensing, and combined inertial-optical sensing. The relatively mature products on the market are Microsoft's Kinect, Sony's PS Move, Nintendo's Wii, and ASUS's Xtion. Since I did not have an ASUS Xtion on hand, I have not evaluated it; the table below compares the other three motion-sensing devices:

[Figure 1: comparison table of the Kinect, PS Move, and Wii]

        As the comparison above shows, Microsoft's Kinect has an overwhelming advantage, so the Kinect is the solution we ultimately adopted.

Successful Applications I Have Experienced Firsthand

        First is the motion-sensing robot we built, which mimics human movements and could be applied to post-disaster search and rescue.

        Next is the Chinese University of Hong Kong's "Improving Communication Ability of the Disabled - Chinese Sign Language Recognition and Translation System", which is essentially a sign-language translator.

        There is also Shanghai University's 3D cinema, which uses the Kinect to track the viewer's head so that the picture actively adapts to the viewer.

Choice of Operating System

        The operating system, of course, had to be Linux: entering an embedded-systems competition with Windows never ends well, haha :-)

        The table below compares common Linux distributions:

[Figure 2: comparison table of common Linux distributions]

        Constrained by the development board's processing speed and graphics performance, we settled on the Fedora 16 distribution.

Choice of Motion-Sensing Driver Library

        I found only two candidates for the motion-sensing driver library: OpenNI and the Kinect SDK. The latter runs only on Windows, so it was ruled out immediately. The comparison is shown below:

[Figure 3: comparison table of OpenNI and the Kinect SDK]

Code — Initializing the Motion-Sensing Device

// Initialize the motion-sensing device
    XnStatus result;
    xn::Context context;
    xn::ScriptNode scriptNode;
    xn::EnumerationErrors errors;
    
    // Configure the OpenNI library from an XML file
    result = context.InitFromXmlFile(CONFIG_XML_PATH, scriptNode, &errors);
    if (XN_STATUS_NO_NODE_PRESENT == result)
    {
        XnChar strError[1024];
        errors.ToString(strError, 1024);
        NsLog()->error(strError);
        return 1;
    }
    else if (!NsLib::CheckOpenNIError(result, "Open config XML failed"))
        return 1;
    
    NsLib::TrackerViewer::createInstance(context, scriptNode);
    NsLib::TrackerViewer &trackerViewer = NsLib::TrackerViewer::getInstance();
    
    if (!trackerViewer.init())
        return 1;
    
    trackerViewer.run();
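
        The NsLib::CheckOpenNIError helper used throughout is a thin logging wrapper from our support library; its body is not shown in this article. A minimal sketch of such a wrapper (illustrative only, not the exact NsLib implementation):

// Illustrative sketch -- not the actual NsLib implementation. Returns true
// when the OpenNI call succeeded; otherwise logs the readable status string
// together with the caller's message. NsLog() is the project's logger, as
// used elsewhere in this article.
#include <cstdio>
#include <XnStatus.h>

namespace NsLib
{
bool CheckOpenNIError(XnStatus result, const char *message)
{
    if (XN_STATUS_OK == result)
        return true;

    char buf[1024];
    // xnGetStatusString() converts an XnStatus code into readable text
    snprintf(buf, sizeof(buf), "%s: %s", message, xnGetStatusString(result));
    NsLog()->error(buf);
    return false;
}
}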
        The TrackerViewer in the code above draws the human skeleton using OpenGL, and the program's synchronization is handled there as well. The key code it references is given below:

// Singleton pattern: only one instance allowed
TrackerViewer *TrackerViewer::pInstance_ = 0;

void TrackerViewer::createInstance(xn::Context &context,
                                   xn::ScriptNode &scriptNode)
{
    assert(!pInstance_);
    
    pInstance_ = new TrackerViewer(context, scriptNode);
}

// Initialize the TrackerViewer
bool TrackerViewer::init()
{
    if (!initDepthGenerator())
        return false;
    
    if (!initUserGenerator())
        return false;
        
    inited_ = true;
    return true;
}

// Initialize the depth generator
bool TrackerViewer::initDepthGenerator()
{
    XnStatus result;
    
    result = Context.FindExistingNode(XN_NODE_TYPE_DEPTH, DepthGenerator);
    if (!CheckOpenNIError(result, 
                         "No depth generator found. Check your XML"))
        return false;  
    
    return true;
}

// Initialize the user generator (the skeleton-recognition engine)
bool TrackerViewer::initUserGenerator()
{
    XnStatus result;
    
    // Cache the depth map resolution; initOpenGL() uses it for the window size
    DepthGenerator.GetMapOutputMode(ImageInfo);
    
    result = Context.FindExistingNode(XN_NODE_TYPE_USER, UserGenerator);
    if (!CheckOpenNIError(result, 
                         "Use mock user generator"))
    {
        result = UserGenerator.Create(Context);
        if (!CheckOpenNIError(result, 
                         "Create mock user generator failed"))
            return false;
    }
    
    result = UserGenerator.RegisterUserCallbacks(User_NewUser, User_LostUser,
                                                 NULL, hUserCallbacks_);
    if (!CheckOpenNIError(result, "Register to user callbacks"))
        return false;
    result = UserGenerator.GetSkeletonCap().RegisterToCalibrationStart(
        UserCalibration_CalibrationStart, NULL, hCalibrationStart_);
    if (!CheckOpenNIError(result, "Register to calibration start"))
        return false;
    result = UserGenerator.GetSkeletonCap().RegisterToCalibrationComplete(
        UserCalibration_CalibrationComplete, NULL, hCalibrationComplete_);
    if (!CheckOpenNIError(result, "Register to calibration complete"))
        return false;
    
    if (UserGenerator.GetSkeletonCap().NeedPoseForCalibration())
    {
        NeedPose = true;
        
        if (!UserGenerator.IsCapabilitySupported(XN_CAPABILITY_POSE_DETECTION))
        {
            NsLog()->error("Pose required, but not supported");
            return false;
        }
        
        result = UserGenerator.GetPoseDetectionCap().RegisterToPoseDetected(
            UserPose_PoseDetected, NULL, hPoseDetected_);
        if (!CheckOpenNIError(result, "Register to Pose Detected"))
            return false;
        
        UserGenerator.GetSkeletonCap().GetCalibrationPose(StrPose);
    }

    UserGenerator.GetSkeletonCap().SetSkeletonProfile(XN_SKEL_PROFILE_ALL);
    result = UserGenerator.GetSkeletonCap().RegisterToCalibrationInProgress(
        MyCalibrationInProgress, NULL, hCalibrationInProgress_);
    if (!CheckOpenNIError(result, "Register to calibration in progress"))
        return false;
    result = UserGenerator.GetPoseDetectionCap().RegisterToPoseInProgress(
        MyPoseInProgress, NULL, hPoseInProgress_);
    if (!CheckOpenNIError(result, "Register to pose in progress"))
        return false;
    
    return true;
}

OpenNI Callbacks for User Tracking

        OpenNI notifies the application of events through callbacks. The callback names I chose should be largely self-explanatory; if anything is still unclear, consult the OpenNI documentation. The code is as follows:

//------------------------------------------------------------------------------
// OpenNI Callbacks
//------------------------------------------------------------------------------
void XN_CALLBACK_TYPE TrackerViewer::User_NewUser(xn::UserGenerator& generator, 
                                                  XnUserID nId, 
                                                  void* pCookie)
{
    std::cout << "New user: " << nId << std::endl;
    
    if (TrackerViewer::getInstance().NeedPose)
    {
        TrackerViewer::getInstance().UserGenerator
            .GetPoseDetectionCap().StartPoseDetection(
                TrackerViewer::getInstance().StrPose, nId);
    }
    else
    {
        TrackerViewer::getInstance().UserGenerator
            .GetSkeletonCap().RequestCalibration(nId, TRUE);
    }
}

void XN_CALLBACK_TYPE TrackerViewer::User_LostUser(xn::UserGenerator& generator, 
                                                   XnUserID nId, 
                                                   void* pCookie)
{
    std::cout << "Lost user: " << nId << std::endl;
}

void XN_CALLBACK_TYPE TrackerViewer::UserPose_PoseDetected(
                                    xn::PoseDetectionCapability& capability, 
                                    const XnChar* strPose, 
                                    XnUserID nId, 
                                    void* pCookie)
{
    std::cout << "Pose " << TrackerViewer::getInstance().StrPose
        << " detected for user " << nId << std::endl;

    TrackerViewer::getInstance().UserGenerator
        .GetPoseDetectionCap().StopPoseDetection(nId);
    TrackerViewer::getInstance().UserGenerator
        .GetSkeletonCap().RequestCalibration(nId, TRUE);
}

void XN_CALLBACK_TYPE TrackerViewer::UserCalibration_CalibrationStart(
                                    xn::SkeletonCapability& capability, 
                                    XnUserID nId, 
                                    void* pCookie)
{
    std::cout << "Calibration started for user " << nId << std::endl;
}

void XN_CALLBACK_TYPE TrackerViewer::UserCalibration_CalibrationComplete(
                                    xn::SkeletonCapability& capability, 
                                    XnUserID nId, 
                                    XnCalibrationStatus eStatus,
                                    void* pCookie)
{
    if (eStatus == XN_CALIBRATION_STATUS_OK)
    {
        std::cout << "Calibration complete, start tracking user "
            << nId << std::endl;
        TrackerViewer::getInstance().UserGenerator
            .GetSkeletonCap().StartTracking(nId);
    }
    else
    {
        if (TrackerViewer::getInstance().NeedPose)
        {
            TrackerViewer::getInstance().UserGenerator
                .GetPoseDetectionCap().StartPoseDetection(
                    TrackerViewer::getInstance().StrPose, nId);
        }
        else
        {
            TrackerViewer::getInstance().UserGenerator
                .GetSkeletonCap().RequestCalibration(nId, TRUE);
        }
    }
}

Tracking Users and Displaying the Skeleton

// Start tracking users
void TrackerViewer::run()
{
    assert(inited_);
    
    XnStatus result;
    
    result = Context.StartGeneratingAll();
    if (!CheckOpenNIError(result, "Start generating failed"))
        return;
    
    initOpenGL(&NsAppConfig().Argc, NsAppConfig().Argv);
    glutMainLoop();
}

// Initialize OpenGL
void TrackerViewer::initOpenGL(int *argc, char **argv)
{
    glutInit(argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
    glutInitWindowSize(ImageInfo.nXRes, ImageInfo.nYRes);
    glutCreateWindow("User Tracker Viewer");
    //glutFullScreen();
    glutSetCursor(GLUT_CURSOR_NONE);

    glutKeyboardFunc(glutKeyboard);
    glutDisplayFunc(glutDisplay);
    glutIdleFunc(glutIdle);

    glDisable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);

    glEnableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
}
//------------------------------------------------------------------------------
// OpenGL Callbacks
//------------------------------------------------------------------------------
void TrackerViewer::glutDisplay()
{
    if (TrackerViewer::getInstance().SignalExitApp)
        exit(0);
    
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Set up the OpenGL viewpoint
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();

    static TrackerViewer &trackerViewer = TrackerViewer::getInstance();

    xn::DepthMetaData depthMD;
    trackerViewer.DepthGenerator.GetMetaData(depthMD);
    glOrtho(0, depthMD.XRes(), depthMD.YRes(), 0, -1.0, 1.0);

    glDisable(GL_TEXTURE_2D);

    trackerViewer.Context.WaitOneUpdateAll(trackerViewer.UserGenerator);

    xn::SceneMetaData sceneMD;
    trackerViewer.DepthGenerator.GetMetaData(depthMD);
    trackerViewer.UserGenerator.GetUserPixels(0, sceneMD);

    DrawDepthMap(depthMD, sceneMD);

    glutSwapBuffers();
}

void TrackerViewer::glutIdle()
{
    if (TrackerViewer::getInstance().SignalExitApp)
        exit(0);
    
    glutPostRedisplay();
}

void TrackerViewer::glutKeyboard(unsigned char key, int x, int y)
{
    switch (key)
    {
    case 27:  // ESC requests a clean exit on the next frame
        TrackerViewer::getInstance().SignalExitApp = true;
        break;
    default:
        break;
    }
}
        The code above completes the GUI logic; the key parts are explained below:

// Map the depth image captured by the Kinect into the 2D coordinate system
// used by OpenGL
    xn::DepthMetaData depthMD;
    trackerViewer.DepthGenerator.GetMetaData(depthMD);
    glOrtho(0, depthMD.XRes(), depthMD.YRes(), 0, -1.0, 1.0);

    glDisable(GL_TEXTURE_2D);
// Wait for the Kinect to deliver a new frame
trackerViewer.Context.WaitOneUpdateAll(trackerViewer.UserGenerator);
// Fetch the depth image and the per-pixel user labels, in preparation for
// drawing the skeleton and computing joint angles
    xn::SceneMetaData sceneMD;
    trackerViewer.DepthGenerator.GetMetaData(depthMD);
    trackerViewer.UserGenerator.GetUserPixels(0, sceneMD);
// Draw the human skeleton and compute the joint angles; see "Kinect
// Motion-Sensing Robot (Part 3) — Computing Joint Angles with Spatial Vectors"
DrawDepthMap(depthMD, sceneMD);
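
        DrawDepthMap and the full joint-angle math are the subject of Part 3, but the basic idea is easy to sketch: once a user is being tracked, OpenNI can report the 3D position of each skeleton joint, and the angle at a joint such as the elbow follows from the dot product of the two bone vectors meeting there. A minimal sketch (the 0.5 confidence cutoff and the choice of the right elbow are illustrative assumptions):

// Sketch: the angle at a user's right elbow, derived from OpenNI skeleton
// joint positions. Assumes 'user' is already being tracked.
#include <cmath>
#include <XnCppWrapper.h>

static double rightElbowAngle(xn::UserGenerator &userGen, XnUserID user)
{
    XnSkeletonJointPosition shoulder, elbow, hand;
    userGen.GetSkeletonCap().GetSkeletonJointPosition(
        user, XN_SKEL_RIGHT_SHOULDER, shoulder);
    userGen.GetSkeletonCap().GetSkeletonJointPosition(
        user, XN_SKEL_RIGHT_ELBOW, elbow);
    userGen.GetSkeletonCap().GetSkeletonJointPosition(
        user, XN_SKEL_RIGHT_HAND, hand);

    // Low-confidence joints give unreliable positions; 0.5 is an assumed cutoff
    if (shoulder.fConfidence < 0.5 || elbow.fConfidence < 0.5
        || hand.fConfidence < 0.5)
        return -1.0;

    // Bone vectors meeting at the elbow: u = elbow->shoulder, v = elbow->hand
    double ux = shoulder.position.X - elbow.position.X;
    double uy = shoulder.position.Y - elbow.position.Y;
    double uz = shoulder.position.Z - elbow.position.Z;
    double vx = hand.position.X - elbow.position.X;
    double vy = hand.position.Y - elbow.position.Y;
    double vz = hand.position.Z - elbow.position.Z;

    double lenU = std::sqrt(ux * ux + uy * uy + uz * uz);
    double lenV = std::sqrt(vx * vx + vy * vy + vz * vz);
    if (lenU == 0.0 || lenV == 0.0)
        return -1.0;

    // cos(theta) = (u . v) / (|u| |v|), clamped against rounding error
    double cosTheta = (ux * vx + uy * vy + uz * vz) / (lenU * lenV);
    if (cosTheta > 1.0) cosTheta = 1.0;
    if (cosTheta < -1.0) cosTheta = -1.0;
    return std::acos(cosTheta) * 180.0 / M_PI;
}
        Part 3 generalizes exactly this kind of computation to every joint the robot mirrors.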

The XML Configuration for OpenNI

        This design configures OpenNI through an XML file combined with API calls; the XML file used is as follows:

<OpenNI>
	<Licenses>
		<!-- Add application-specific licenses here 
		<License vendor="vendor" key="key"/>
		-->
	</Licenses>
	<Log writeToConsole="false" writeToFile="false">
		<!-- 0 - Verbose, 1 - Info, 2 - Warning, 3 - Error (default) -->
		<LogLevel value="3"/>
		<Masks>
			<Mask name="ALL" on="true"/>
		</Masks>
		<Dumps>
		</Dumps>
	</Log>
	<ProductionNodes>
		<!-- Set global mirror -->
		<GlobalMirror on="true"/>
		
		<!-- Create a depth node and give it a name alias (useful if referenced ahead in this script) -->
		<Node type="Depth" name="MyDepth">
			<Query>
				<!-- Uncomment to filter by vendor name, product name, etc.
				<Vendor>MyVendor inc.</Vendor>
				<Name>MyProduct</Name>
				<MinVersion>1.2.3.4</MinVersion>
				<Capabilities>
					<Capability>Cropping</Capability>
				</Capabilities>
				-->
			</Query>
			<Configuration>
				<MapOutputMode xRes="640" yRes="480" FPS="30"/> 

				<!-- Uncomment to override global mirror
				<Mirror on="false" /> 
				-->
			</Configuration>
		</Node>

	</ProductionNodes>
</OpenNI>
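
        For reference, the same configuration can also be built purely through the API, without the XML file. A minimal sketch of the equivalent programmatic setup (error handling trimmed; the function name is illustrative):

// Sketch: API-only equivalent of the XML above. Licenses and log settings
// are left at their defaults.
#include <XnCppWrapper.h>

bool initWithoutXml(xn::Context &context, xn::DepthGenerator &depth)
{
    if (context.Init() != XN_STATUS_OK)
        return false;

    // Matches <GlobalMirror on="true"/>
    context.SetGlobalMirror(TRUE);

    if (depth.Create(context) != XN_STATUS_OK)
        return false;

    // Matches <MapOutputMode xRes="640" yRes="480" FPS="30"/>
    XnMapOutputMode mode;
    mode.nXRes = 640;
    mode.nYRes = 480;
    mode.nFPS = 30;
    return depth.SetMapOutputMode(mode) == XN_STATUS_OK;
}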

Results

[Figure 4: the skeleton tracker in action]
