Human activity recognition can be modeled directly as an image-recognition-style task, so a CNN is a natural fit: an image is essentially a 2D matrix, and CNNs are well suited to processing that kind of data. One-dimensional sensor data can likewise be handled with a CNN (using 1D convolutions), or alternatively with classical machine-learning models.
Today I came across an interesting dataset: WISDM, a human activity/posture dataset. It covers 36 subjects, each performing 6 activities, as follows:
{'Sitting':0,'Downstairs':1,'Standing':2,'Walking':3,'Upstairs':4,'Jogging':5}
To make label conversion convenient for computation, I built this mapping dictionary.
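The mapping above, plus its inverse for decoding predictions back into activity names, can be written as a small sketch (the variable names `LABEL2ID`/`ID2LABEL` are my own choice, not from the original post):

```python
# Label mapping used throughout: activity name -> integer class id.
# The integer codes are arbitrary but must stay consistent between
# training and inference.
LABEL2ID = {'Sitting': 0, 'Downstairs': 1, 'Standing': 2,
            'Walking': 3, 'Upstairs': 4, 'Jogging': 5}

# Inverse mapping for turning model predictions back into activity names.
ID2LABEL = {v: k for k, v in LABEL2ID.items()}

print(LABEL2ID['Jogging'])  # 5
print(ID2LABEL[3])          # Walking
```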
The dataset was collected by sampling each subject's sensor readings at a fixed sampling frequency.
Let's first take a quick look at the data; below are the first rows of the dataset:
1,33,0.04,0.09,0.14,0.12,0.11,0.1,0.08,0.13,0.13,0.08,0.09,0.1,0.11,0.11,0.08,0.04,0.16,0.13,0.1,0.03,0.12,0.08,0.09,0.12,0.1,0.1,0.08,0.11,0.12,0.1,0,8.4,1.76,2075,293.94,1550,3.29,7.21,4,4.05,8.17,4.05,11.96,Jogging
2,33,0.12,0.12,0.06,0.07,0.11,0.1,0.11,0.09,0.12,0.1,0.12,0.11,0.07,0.1,0.13,0.13,0.06,0.11,0.1,0.04,0.11,0.11,0.11,0.09,0.12,0.1,0.11,0.
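Each row appears to consist of a record index, a subject id, a series of numeric features, and the activity label in the last column. A minimal loading sketch under those assumptions (the file path is a placeholder, and the decision to drop the first two columns from the feature vector is my reading of the sample rows, not confirmed by the original post):

```python
import csv

# Activity name -> integer class id, matching the mapping defined earlier.
LABEL2ID = {'Sitting': 0, 'Downstairs': 1, 'Standing': 2,
            'Walking': 3, 'Upstairs': 4, 'Jogging': 5}

def load_rows(path):
    """Load WISDM-style CSV rows into feature vectors and integer labels."""
    X, y = [], []
    with open(path, newline='') as f:
        for row in csv.reader(f):
            if not row:
                continue
            *fields, label = row
            # Columns 0 and 1 look like a record index and a user id,
            # so only the remaining numeric fields go into the features.
            X.append([float(v) for v in fields[2:]])
            y.append(LABEL2ID[label])
    return X, y
```

With the mapping applied, a row ending in `Jogging` would yield the label `5` and a purely numeric feature vector ready for a CNN or a classical model.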