Note: the author wrote a series of blog posts that are worth referencing:
After playing with both the Microsoft Kinect SDK and the PrimeSense OpenNI SDK, here are some of my thoughts. (Note that the Microsoft SDK is the Beta version, so things may change when the final one is released.)
(2) Three articles referenced for data acquisition:
http://viml.nchc.org.tw/blog/paper_info.php?CLASS_ID=1&SUB_ID=1&PAPER_ID=216
http://viml.nchc.org.tw/blog/paper_info.php?CLASS_ID=1&SUB_ID=1&PAPER_ID=217
There is one more article, which goes into more detail:
(1) The author's sample program: reading Kinect data with OpenNI...
#include <stdlib.h>
#include <iostream>
#include <string>
#include <XnCppWrapper.h>

using namespace std;

void CheckOpenNIError( XnStatus eResult, string sStatus )
{
    if( eResult != XN_STATUS_OK )
        cerr << sStatus << " Error: " << xnGetStatusString( eResult ) << endl;
}

int main( int argc, char** argv )
{
    XnStatus eResult = XN_STATUS_OK;

    // 1. initialize context
    xn::Context mContext;
    eResult = mContext.Init();
    CheckOpenNIError( eResult, "initialize context" );

    // 2. set map mode
    XnMapOutputMode mapMode;
    mapMode.nXRes = 640;
    mapMode.nYRes = 480;
    mapMode.nFPS = 30;

    // 3. create depth generator
    xn::DepthGenerator mDepthGenerator;
    eResult = mDepthGenerator.Create( mContext );
    CheckOpenNIError( eResult, "Create depth generator" );
    eResult = mDepthGenerator.SetMapOutputMode( mapMode );

    // 4. start generating data
    eResult = mContext.StartGeneratingAll();

    // 5. read data
    eResult = mContext.WaitAndUpdateAll();
    if( eResult == XN_STATUS_OK )
    {
        // 6. get the depth map
        const XnDepthPixel* pDepthMap = mDepthGenerator.GetDepthMap();
        // ... do something with the depth map here
    }

    // 7. stop
    mContext.StopGeneratingAll();
    mContext.Shutdown();
    return 0;
}
(2) Merging the depth and RGB images: based on a modification of the previous program.
#include <stdlib.h>
#include <iostream>
#include <string>
#include <XnCppWrapper.h>

using namespace std;

void CheckOpenNIError( XnStatus eResult, string sStatus )
{
    if( eResult != XN_STATUS_OK )
        cerr << sStatus << " Error: " << xnGetStatusString( eResult ) << endl;
}

int main( int argc, char** argv )
{
    XnStatus eResult = XN_STATUS_OK;

    // 2. initialize context
    xn::Context mContext;
    eResult = mContext.Init();
    CheckOpenNIError( eResult, "initialize context" );

    // 3. create depth generator
    xn::DepthGenerator mDepthGenerator;
    eResult = mDepthGenerator.Create( mContext );
    CheckOpenNIError( eResult, "Create depth generator" );

    // 4. create image generator
    xn::ImageGenerator mImageGenerator;
    eResult = mImageGenerator.Create( mContext );
    CheckOpenNIError( eResult, "Create image generator" );

    // 5. set map mode
    XnMapOutputMode mapMode;
    mapMode.nXRes = 640;
    mapMode.nYRes = 480;
    mapMode.nFPS = 30;
    eResult = mDepthGenerator.SetMapOutputMode( mapMode );
    eResult = mImageGenerator.SetMapOutputMode( mapMode );

    // 6. correct view point: align the depth map to the image generator
    mDepthGenerator.GetAlternativeViewPointCap().SetViewPoint( mImageGenerator );

    // 7. start generating data
    eResult = mContext.StartGeneratingAll();

    // 8. read data (WaitNoneUpdateAll returns immediately without blocking)
    eResult = mContext.WaitNoneUpdateAll();
    if( eResult == XN_STATUS_OK )
    {
        // 9a. get the depth map
        const XnDepthPixel* pDepthMap = mDepthGenerator.GetDepthMap();
        // 9b. get the image map
        const XnUInt8* pImageMap = mImageGenerator.GetImageMap();
    }

    // 10. stop
    mContext.StopGeneratingAll();
    mContext.Shutdown();
    return 0;
}
(3) Building a 3D point cloud: a modification of the previous program.
Define a simple struct, SColorPoint3D:
struct SColorPoint3D
{
    float X;
    float Y;
    float Z;
    float R;
    float G;
    float B;

    SColorPoint3D( XnPoint3D pos, XnRGB24Pixel color )
    {
        X = pos.X;
        Y = pos.Y;
        Z = pos.Z;
        R = (float)color.nRed / 255;
        G = (float)color.nGreen / 255;
        B = (float)color.nBlue / 255;
    }
};
Its six fields record the point's position and its color. The constructor takes two OpenNI-defined structures as parameters: an XnPoint3D for the position and an XnRGB24Pixel for the RGB color.
The coordinate conversion is written as a function, GeneratePointCloud(), whose contents are as follows:
void GeneratePointCloud( xn::DepthGenerator& rDepthGen,
                         const XnDepthPixel* pDepth,
                         const XnRGB24Pixel* pImage,
                         vector<SColorPoint3D>& vPointCloud )  // requires #include <vector>
{
    // 1. the number of points is the number of 2D image pixels
    xn::DepthMetaData mDepthMD;
    rDepthGen.GetMetaData( mDepthMD );
    unsigned int uPointNum = mDepthMD.FullXRes() * mDepthMD.FullYRes();

    // 2. build the data structure for the conversion
    XnPoint3D* pDepthPointSet = new XnPoint3D[ uPointNum ];
    unsigned int i, j, idxShift, idx;
    for( j = 0; j < mDepthMD.FullYRes(); ++j )
    {
        idxShift = j * mDepthMD.FullXRes();
        for( i = 0; i < mDepthMD.FullXRes(); ++i )
        {
            idx = idxShift + i;
            pDepthPointSet[idx].X = i;
            pDepthPointSet[idx].Y = j;
            pDepthPointSet[idx].Z = pDepth[idx];
        }
    }

    // 3. un-project the points to real-world coordinates
    XnPoint3D* p3DPointSet = new XnPoint3D[ uPointNum ];
    rDepthGen.ConvertProjectiveToRealWorld( uPointNum, pDepthPointSet, p3DPointSet );
    delete[] pDepthPointSet;

    // 4. build the point cloud
    for( i = 0; i < uPointNum; ++i )
    {
        // skip points with depth 0 (no valid reading)
        if( p3DPointSet[i].Z == 0 )
            continue;
        vPointCloud.push_back( SColorPoint3D( p3DPointSet[i], pImage[i] ) );
    }
    delete[] p3DPointSet;
}
The function takes the xn::DepthGenerator along with the depth image and color image that were read, using them as its data source; it also takes a vector<SColorPoint3D> in which to store the converted 3D point data.
The depth image is still passed as a const pointer to XnDepthPixel, but for the color image Heresy switched to XnRGB24Pixel, which packs the RGB channels together and so saves some index arithmetic. Because of this change, the code that reads the color image must also be changed to:

const XnRGB24Pixel* pImageMap = mImageGenerator.GetRGB24ImageMap();
Back in the main program, the original data-reading code was:
// 8. read data
eResult = mContext.WaitNoneUpdateAll();
if( eResult == XN_STATUS_OK )
{
    // 9a. get the depth map
    const XnDepthPixel* pDepthMap = mDepthGenerator.GetDepthMap();
    // 9b. get the image map
    const XnUInt8* pImageMap = mImageGenerator.GetImageMap();
}
As mentioned before, Heresy does not cover the OpenGL display part here, so to keep the data refreshed the program is changed to an infinite loop that continuously updates the data and performs the coordinate conversion; the converted result is then reported simply by printing the number of points.
After the modification:
// 8. read data
vector<SColorPoint3D> vPointCloud;
while( true )
{
    eResult = mContext.WaitNoneUpdateAll();

    // 9a. get the depth map
    const XnDepthPixel* pDepthMap = mDepthGenerator.GetDepthMap();
    // 9b. get the image map
    const XnRGB24Pixel* pImageMap = mImageGenerator.GetRGB24ImageMap();

    // 10. generate the point cloud
    vPointCloud.clear();
    GeneratePointCloud( mDepthGenerator, pDepthMap, pImageMap, vPointCloud );
    cout << "Point number: " << vPointCloud.size() << endl;
}
To draw the result with OpenGL, you would basically drop the infinite loop and instead, each time before drawing, read the Kinect data and convert it with GeneratePointCloud(). If you do not reconstruct polygons but, like Heresy, simply draw the points one by one, the result will look roughly like the video above.