In the previous article we walked through the initialization steps that happen before video image data flows from the Kinect SDK into the application. This article describes how one frame of depth data is retrieved and displayed.
The window is updated from the message handling in the Run function: once the Kinect SDK signals m_hNextDepthFrameEvent, the application layer receives the event and calls Update to refresh the window. The key part is shown below; the full code of the Run function can also be found in the previous post.
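For reference, here is a minimal sketch of the relevant part of that message loop, modeled on the Kinect SDK's DepthBasics-D2D sample which this series follows (the msg, hWndApp, and eventCount variables are taken from that sample's Run function; the exact code in the previous post may differ in detail):

    // Inside CDepthBasics::Run(): wait on either a window message or the depth-frame-ready event
    const int eventCount = 1;
    HANDLE   hEvents[eventCount];
    MSG      msg = {0};

    while (WM_QUIT != msg.message)
    {
        hEvents[0] = m_hNextDepthFrameEvent;

        // Wake up when the Kinect signals a new depth frame (hEvents)
        // or when a window message arrives (QS_ALLINPUT)
        DWORD dwEvent = MsgWaitForMultipleObjects(eventCount, hEvents, FALSE, INFINITE, QS_ALLINPUT);

        // The event, not a message, woke us up: a new frame should be ready
        if (WAIT_OBJECT_0 == dwEvent)
        {
            Update();
        }

        // Drain pending window messages
        if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
        {
            // Let the dialog procedure handle dialog messages
            if (hWndApp != NULL && IsDialogMessage(hWndApp, &msg))
            {
                continue;
            }

            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }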
The Update function:
void CDepthBasics::Update()
{
    if (NULL == m_pNuiSensor)
    {
        // If m_pNuiSensor is NULL, the device has not been initialized yet
        return;
    }

    // Check whether m_hNextDepthFrameEvent is actually signaled. Strictly speaking this check
    // could be dropped here, but it is kept for robustness.
    if ( WAIT_OBJECT_0 == WaitForSingleObject(m_hNextDepthFrameEvent, 0) )
    {
        ProcessDepth();   // Process the new depth frame
    }
}
The ProcessDepth function:

void CDepthBasics::ProcessDepth()
{
    HRESULT hr;
    NUI_IMAGE_FRAME imageFrame;

    // Fetch image data from m_pDepthStreamHandle through the Kinect object.
    // Remember m_pDepthStreamHandle? It is the depth stream opened when the Kinect device was initialized.
    // The point of this call is to store one depth frame in imageFrame.
    hr = m_pNuiSensor->NuiImageStreamGetNextFrame(m_pDepthStreamHandle, 0, &imageFrame);
    if (FAILED(hr))
    {
        return;
    }

    BOOL nearMode;
    INuiFrameTexture* pTexture;

    // Get the depth image pixel texture, i.e. turn the data referenced by imageFrame into a texture
    hr = m_pNuiSensor->NuiImageFrameGetDepthImagePixelFrameTexture(
        m_pDepthStreamHandle, &imageFrame, &nearMode, &pTexture);
    if (FAILED(hr))
    {
        goto ReleaseFrame;
    }

    NUI_LOCKED_RECT LockedRect;

    // Lock the frame data so the Kinect knows not to modify it while we're reading it
    pTexture->LockRect(0, &LockedRect, NULL, 0);

    // Make sure we've received valid data
    if (LockedRect.Pitch != 0)
    {
        // Get the min and max reliable depth for the current frame
        int minDepth = (nearMode ? NUI_IMAGE_DEPTH_MINIMUM_NEAR_MODE : NUI_IMAGE_DEPTH_MINIMUM) >> NUI_IMAGE_PLAYER_INDEX_SHIFT;
        int maxDepth = (nearMode ? NUI_IMAGE_DEPTH_MAXIMUM_NEAR_MODE : NUI_IMAGE_DEPTH_MAXIMUM) >> NUI_IMAGE_PLAYER_INDEX_SHIFT;

        // Keep the start address of m_depthRGBX in rgbrun so pixels can be written sequentially
        BYTE * rgbrun = m_depthRGBX;
        const NUI_DEPTH_IMAGE_PIXEL * pBufferRun = reinterpret_cast<const NUI_DEPTH_IMAGE_PIXEL *>(LockedRect.pBits);

        // end pixel is start + width*height - 1
        const NUI_DEPTH_IMAGE_PIXEL * pBufferEnd = pBufferRun + (cDepthWidth * cDepthHeight);

        // Fill m_depthRGBX (through rgbrun)
        while ( pBufferRun < pBufferEnd )
        {
            // discard the portion of the depth that contains only the player index
            USHORT depth = pBufferRun->depth;

            // To convert to a byte, we're discarding the most-significant
            // rather than least-significant bits.
            // We're preserving detail, although the intensity will "wrap."
            // Values outside the reliable depth range are mapped to 0 (black).

            // Note: Using conditionals in this loop could degrade performance.
            // Consider using a lookup table instead when writing production code.
            BYTE intensity = static_cast<BYTE>(depth >= minDepth && depth <= maxDepth ? depth % 256 : 0);

            // Write out blue byte
            *(rgbrun++) = intensity;

            // Write out green byte
            *(rgbrun++) = intensity;

            // Write out red byte
            *(rgbrun++) = intensity;

            // We're outputting BGR, the last byte in the 32 bits is unused so skip it
            // If we were outputting BGRA, we would write alpha here.
            ++rgbrun;

            // Increment our index into the Kinect's depth buffer
            ++pBufferRun;
        }

        // Draw the data with Direct2D: display the image now stored in m_depthRGBX in the window
        m_pDrawDepth->Draw(m_depthRGBX, cDepthWidth * cDepthHeight * cBytesPerPixel);
    }

    // We're done with the texture, so unlock it and release it
    pTexture->UnlockRect(0);
    pTexture->Release();

ReleaseFrame:
    // Release the frame
    m_pNuiSensor->NuiImageStreamReleaseFrame(m_pDepthStreamHandle, &imageFrame);
}
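The sample's own comment suggests replacing the per-pixel conditional and modulo with a lookup table in production code. One possible sketch of that idea is shown below; the table s_intensityTable and the helper BuildIntensityTable are hypothetical names of my own, not part of the SDK sample, and the table would have to be rebuilt whenever nearMode (and hence minDepth/maxDepth) changes:

    #include <Windows.h>   // BYTE, USHORT
    #include <climits>     // USHRT_MAX

    // Hypothetical lookup table: maps every possible 16-bit depth value to a display intensity,
    // built once per near-mode setting instead of branching for every pixel.
    static BYTE s_intensityTable[USHRT_MAX + 1];

    // Hypothetical helper, not in the SDK sample
    void BuildIntensityTable(int minDepth, int maxDepth)
    {
        for (int depth = 0; depth <= USHRT_MAX; ++depth)
        {
            s_intensityTable[depth] =
                (depth >= minDepth && depth <= maxDepth) ? static_cast<BYTE>(depth % 256) : 0;
        }
    }

    // Inside the pixel loop, the conditional then collapses to a single table read:
    //     BYTE intensity = s_intensityTable[pBufferRun->depth];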