It has been a long time since my last update. Partly I have just been lazy, and partly, after grinding away for a while, I have come to feel that the introductory material does not really need such detailed write-ups.
This time I am posting a small demo I wrote a while back. As a beginner it took me a whole week to piece together; looking back, my skills were still pretty shallow then. The details along the way, such as measuring a person's height and width from the depth image, or writing the point data out to a text file for storage, are quite easy once you have the language basics down, so I will not walk through them one by one. Also, this dressing-room system is really simple; compared to proper 3D garment fitting it is laughably weak, so treat it as a little game: something to have fun with while practicing your programming!
First, prepare the assets: a background image and the "clothes"; for convenience I also made a head image to cover the face.
Here is the approach, which is really a combined application of everything from the earlier posts in this series: use the color source to grab the color image; use the depth frame to locate the person; overlay the color and depth data as a real-time filter that cuts the person out of the background; use the skeleton to get the positions of the body joints and pin the clothing assets onto the body; and derive a scale factor from how far the person is from the camera, so the assets can be enlarged, shrunk, and rotated to produce "clothes that fit". (If you have followed the earlier introductory posts, the official SDK samples should be easy to read, and building a demo like this is quite painless.)
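Before the snippets, here is a minimal sketch of the plumbing everything below assumes: one MultiSourceFrameReader that delivers matched depth, color, body-index, and body frames. The field names (kinectSensor, multiSourceFrameReader, coordinateMapper) are my guesses, not necessarily what the original project calls them:

// Hedged sketch: open one reader over every source the pipeline needs,
// so each MultiSourceFrameArrived event carries a matched set of frames.
this.kinectSensor = KinectSensor.GetDefault();
this.coordinateMapper = this.kinectSensor.CoordinateMapper;
this.multiSourceFrameReader = this.kinectSensor.OpenMultiSourceFrameReader(
    FrameSourceTypes.Depth | FrameSourceTypes.Color |
    FrameSourceTypes.BodyIndex | FrameSourceTypes.Body);
this.multiSourceFrameReader.MultiSourceFrameArrived += this.Reader_MultiSourceFrameArrived;
this.kinectSensor.Open();

With the sources wired up, here is the code. First, the method that scales the clothing overlay to the player: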
private void ChangeSize(Body body)
{
    // Size the clothing from the player's on-screen height (neck to ankle),
    // using whichever leg gives the longer span.
    double neckY = GetJointPointScreen(body.Joints[JointType.Neck]).Y;
    double leftSpan = Math.Abs(neckY - GetJointPointScreen(body.Joints[JointType.AnkleLeft]).Y);
    double rightSpan = Math.Abs(neckY - GetJointPointScreen(body.Joints[JointType.AnkleRight]).Y);
    imageFronSize = (int)Math.Max(leftSpan, rightSpan);

    // Ignore implausibly small spans (player too far away or badly tracked).
    if (imageFronSize > 20)
    {
        imageForn.Width = imageFronSize;
        imageForn.Height = imageFronSize;
        // Anchor the clothing at the neck, centered horizontally.
        imageForn.Margin = new Thickness(
            GetJointPointScreen(body.Joints[JointType.Neck]).X - imageFronSize / 2,
            neckY, 0, 0);
        // Place the head overlay roughly centered on the head joint.
        headImage.Margin = new Thickness(
            GetJointPointScreen(body.Joints[JointType.Head]).X - 40,
            GetJointPointScreen(body.Joints[JointType.Head]).Y - 40, 0, 0);
    }
}
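GetJointPointScreen is not reproduced in the post. Assuming the overlay canvas matches the 1920x1080 color frame, a plausible minimal version looks like this (the helper body is my reconstruction, not the original):

// Project a 3D camera-space joint onto the 2D color image plane.
private Point GetJointPointScreen(Joint joint)
{
    ColorSpacePoint p = this.coordinateMapper.MapCameraPointToColorSpace(joint.Position);
    // Untracked joints can project to infinity; clamp them to the origin.
    float x = float.IsInfinity(p.X) ? 0 : p.X;
    float y = float.IsInfinity(p.Y) ? 0 : p.Y;
    return new Point(x, y);
}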
Next, the skeleton is used for simple pose detection, followed by part of the background-removal (matting) code:
#region Process the body (skeleton) source and run pose detection
using (bodyFrame = multiSourceFrame.BodyFrameReference.AcquireFrame())
{
    if (bodyFrame != null)
    {
        bodyFrame.GetAndRefreshBodyData(this._Bodies);
        sureBody = getSureBody(this._Bodies);
        if (sureBody != null) // only evaluate the data once we know it is valid
        {
            getSureGesture();
            imageForn.Visibility = Visibility.Visible;
            headImage.Visibility = Visibility.Visible;
            ChangeSize(sureBody);
        }
    }
    else
    {
        // No body frame this tick: hide the overlays.
        if (imageForn.Visibility == Visibility.Visible)
        {
            imageForn.Visibility = Visibility.Hidden;
        }
        if (headImage.Visibility == Visibility.Visible)
        {
            headImage.Visibility = Visibility.Hidden;
        }
    }
}
#endregion
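getSureBody and getSureGesture are helpers from the project and are not shown in the post. As a hedged sketch, a minimal getSureBody only needs to return the first tracked skeleton:

// Pick the first tracked body out of the six slots Kinect v2 reports.
private Body getSureBody(Body[] bodies)
{
    foreach (Body body in bodies)
    {
        if (body != null && body.IsTracked)
        {
            return body;
        }
    }
    return null; // nobody is being tracked right now
}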
#region Use the depth, body-index, and color frames to cut the player out of the scene
// (This is the tail of the try block opened earlier in the
// MultiSourceFrameArrived handler, where the frames are acquired;
// only the matting part is shown here.)
if ((depthFrame == null) || (colorFrame == null) || (bodyIndexFrame == null))
{
    return;
}

FrameDescription depthFrameDescription = depthFrame.FrameDescription;
depthWidth = depthFrameDescription.Width;
depthHeight = depthFrameDescription.Height;

// For every color pixel, compute the matching depth-space coordinate.
using (KinectBuffer depthFrameData = depthFrame.LockImageBuffer())
{
    this.coordinateMapper.MapColorFrameToDepthSpaceUsingIntPtr(
        depthFrameData.UnderlyingBuffer,
        depthFrameData.Size,
        this.colorMappedToDepthPoints);
}

// Release the depth frame as early as possible.
depthFrame.Dispose();
depthFrame = null;

// Copy the color frame straight into the WriteableBitmap's back buffer.
this.bitmap.Lock();
isBitmapLocked = true;
colorFrame.CopyConvertedFrameDataToIntPtr(this.bitmap.BackBuffer, this.bitmapBackBufferSize, ColorImageFormat.Bgra);
colorFrame.Dispose();
colorFrame = null;

using (KinectBuffer bodyIndexData = bodyIndexFrame.LockImageBuffer())
{
    unsafe
    {
        byte* bodyIndexDataPointer = (byte*)bodyIndexData.UnderlyingBuffer;
        int colorMappedToDepthPointCount = this.colorMappedToDepthPoints.Length;
        fixed (DepthSpacePoint* colorMappedToDepthPointsPointer = this.colorMappedToDepthPoints)
        {
            uint* bitmapPixelsPointer = (uint*)this.bitmap.BackBuffer;
            for (int colorIndex = 0; colorIndex < colorMappedToDepthPointCount; ++colorIndex)
            {
                float colorMappedToDepthX = colorMappedToDepthPointsPointer[colorIndex].X;
                float colorMappedToDepthY = colorMappedToDepthPointsPointer[colorIndex].Y;

                // Negative infinity means this color pixel has no depth mapping.
                if (!float.IsNegativeInfinity(colorMappedToDepthX) &&
                    !float.IsNegativeInfinity(colorMappedToDepthY))
                {
                    // Round to the nearest depth pixel.
                    int depthX = (int)(colorMappedToDepthX + 0.5f);
                    int depthY = (int)(colorMappedToDepthY + 0.5f);
                    if ((depthX >= 0) && (depthX < depthWidth) && (depthY >= 0) && (depthY < depthHeight))
                    {
                        int depthIndex = (depthY * depthWidth) + depthX;

                        // 0xff in the body-index frame means "no player";
                        // anything else belongs to a tracked body, so keep it.
                        if (bodyIndexDataPointer[depthIndex] != 0xff)
                        {
                            continue;
                        }
                    }
                }

                // Background pixel: clear it to transparent black.
                bitmapPixelsPointer[colorIndex] = 0;
            }
        }
        this.bitmap.AddDirtyRect(new Int32Rect(0, 0, this.bitmap.PixelWidth, this.bitmap.PixelHeight));
    }
}
} // closes the try block opened when the frames were acquired (not shown)
finally
{
    if (isBitmapLocked)
    {
        this.bitmap.Unlock();
    }

    // Dispose whatever frames are still alive if we bailed out early.
    if (depthFrame != null)
    {
        depthFrame.Dispose();
    }
    if (colorFrame != null)
    {
        colorFrame.Dispose();
    }
    if (bodyIndexFrame != null)
    {
        bodyIndexFrame.Dispose();
    }
    if (bodyFrame != null)
    {
        bodyFrame.Dispose();
    }
}
#endregion
Here are some screenshots of the result:
Note: because the color and depth sources have different resolutions, and the captured depth map has ragged edges, readers comfortable with image-processing algorithms can filter the data for a much cleaner result!
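As a hedged sketch of what such filtering might look like (this is not part of the demo): a single-pixel erosion of the body-index mask trims the noisiest boundary pixels before the mapping loop consults it. You would first copy the frame out with bodyIndexFrame.CopyFrameDataToArray; a median filter or temporal averaging would do better still.

// Hypothetical helper: erode the body-index mask by one pixel.
// In the Kinect v2 body-index frame, 0xff means "no player".
private static byte[] ErodeBodyIndex(byte[] mask, int width, int height)
{
    byte[] result = (byte[])mask.Clone();
    for (int y = 1; y < height - 1; y++)
    {
        for (int x = 1; x < width - 1; x++)
        {
            int i = y * width + x;
            if (mask[i] == 0xff)
            {
                continue; // already background
            }
            // Drop any player pixel that touches the background.
            if (mask[i - 1] == 0xff || mask[i + 1] == 0xff ||
                mask[i - width] == 0xff || mask[i + width] == 0xff)
            {
                result[i] = 0xff;
            }
        }
    }
    return result;
}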
If you have anything to say, please do share; I would love to hear it. Learning Kinect is purely a hobby for me, and once you understand the API you realize that getting good results from touchless interaction is inseparable from the algorithms you apply. For various reasons I cannot devote myself to this full-time right now, but I will keep studying when I have spare time. Anyway, anyone who can work through the official SDK samples can quickly build a little demo like this. The source is here for whoever needs it, and the comments inside should be easy to follow; after all, I wrote it for fun when I was just learning C# myself!
Click to open the link