Tags (space-separated): ActionRecognition ReadingNote
project page: http://www.di.ens.fr/willow/research/ltc
GitHub: https://github.com/gulvarol/ltc
Existing approaches typically use CNNs to learn action representations, but it is hard to model an action completely from only a small number of video frames.
This work: learn video representations with long-term temporal convolutions (LTC), i.e. 3D convolutions over longer input clips.
Results: UCF101 (92.7%) and HMDB51 (67.2%).
The network has 5 space-time convolutional layers with 64, 128, 256, 256 and 256 filter response maps, followed by 3 fully connected layers of sizes 2048, 2048 and number of classes.
All convolution kernels are 3×3×3. Every convolutional layer is followed by ReLU and max pooling (2×2×1 after the first layer, 2×2×2 for the rest). Inputs are padded by one pixel in all three dimensions; stride is 1 for convolution and 2 for pooling. Dropout is applied to the first two fc layers, each fc layer is followed by ReLU, and a final softmax layer outputs class scores.
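A minimal PyTorch sketch of the architecture described above (my own reconstruction from these notes, not the authors' code; their release, linked above, is in Torch7; input layout is assumed to be (batch, channels, time, height, width)):

```python
import torch
import torch.nn as nn

class LTC(nn.Module):
    """Sketch of the 5-conv / 3-fc space-time network described above.

    `fc_in` defaults to the 16f case, where conv5 outputs 256 maps of
    size 1x3x3 (time x height x width); it must be changed for other
    input resolutions.
    """
    def __init__(self, num_classes=101, in_channels=3, fc_in=256 * 1 * 3 * 3):
        super().__init__()
        filters = [64, 128, 256, 256, 256]
        layers, c = [], in_channels
        for i, f in enumerate(filters):
            layers += [
                nn.Conv3d(c, f, kernel_size=3, stride=1, padding=1),
                nn.ReLU(inplace=True),
                # first pooling keeps temporal resolution (2x2x1);
                # the rest also halve time (2x2x2)
                nn.MaxPool3d(kernel_size=(1, 2, 2) if i == 0 else 2),
            ]
            c = f
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(
            nn.Dropout(0.9), nn.Linear(fc_in, 2048), nn.ReLU(inplace=True),
            nn.Dropout(0.9), nn.Linear(2048, 2048), nn.ReLU(inplace=True),
            nn.Linear(2048, num_classes),  # softmax is folded into the loss
        )

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))
```

For a 16f RGB clip the input tensor would be `(N, 3, 16, 112, 112)` and the output `(N, num_classes)`.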
Starting from C3D, the paper first compares 16-frame and 60-frame inputs.
It then systematically analyzes the effect of increasing the spatial and temporal resolution of the input signal, separately for motion and appearance.
16f: 112×112×16 crops from frames of 171×128 pixels
60f: 58×58×60 crops from frames of 89×67 pixels
In the 60f network, the temporal resolutions after the five conv+pool stages are 60, 30, 15, 7 and 3 frames.
In the 16f network: 16, 8, 4, 2, 1.
The space-time resolution for the outputs of the fifth convolutional layers is 3×3×1 and 1×1×3 for the 16f and 60f networks respectively.
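These numbers follow directly from the pooling schedule; a quick sanity check in plain Python (my own helper, not from the paper):

```python
def conv5_output(t, h, w):
    """Propagate (time, height, width) through the five conv+pool stages.

    The 3x3x3 convolutions use padding 1 and stride 1, so they preserve
    resolution; only max pooling shrinks it (2x2 spatially at every
    stage, temporal stride 2 at every stage except the first).
    """
    for stage in range(5):
        if stage > 0:          # first pooling keeps temporal resolution
            t //= 2
        h //= 2
        w //= 2
    return t, h, w

print(conv5_output(16, 112, 112))  # (1, 3, 3)
print(conv5_output(60, 58, 58))    # (3, 1, 1)
```

This reproduces the resolutions quoted above: 3×3 spatial × 1 temporal for 16f, 1×1 spatial × 3 temporal for 60f.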
For a systematic study of networks with different input resolutions, they also evaluate the effect of increased temporal resolution t ∈ {20, 40, 60, 80, 100} and varying spatial resolutions of 58×58 and 71×71 pixels.
Besides RGB input, optical flow in the x and y directions is also used as input.
Three flow variants are compared: MPEG flow, Farnebäck, and Brox.
Training details:
Datasets: HMDB51 (3.7k videos) / UCF101 (9.5k videos)
SGD with mini-batches, maximizing the log-likelihood.
16f network: 30 clips/batch
60f network: 15 clips/batch
100f network: 10 clips/batch
Initial learning rate:
3×10⁻³ when training from scratch, 3×10⁻⁴ otherwise
On UCF101 the learning rate is decreased twice, each time by a factor of 10 (16f: first after 80k iterations, again after another 45k; training converges after roughly 20k more iterations).
HMDB51 converges faster: divide by 10 once at 60k iterations; training converges after roughly another 10k.
For the 60f network these iteration counts are doubled, for 100f tripled.
dropout: 0.9
momentum: 0.9
weight decay: initialized to 5×10⁻³ and reduced by a factor of 10 at every decrease of the learning rate.
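The schedule above can be put together for the 16f/UCF101 case as a small step function (my own sketch; the iteration thresholds come from the numbers above, and `scale` would be 2 for 60f and 3 for 100f):

```python
def hyperparams(iteration, scale=1, base_lr=3e-3, base_wd=5e-3):
    """Step schedule sketch for the 16f network on UCF101.

    The learning rate drops by 10x after 80k iterations and again 45k
    iterations later (i.e. at 125k); weight decay is divided by 10 at
    the same points. For 60f/100f the thresholds scale by 2x/3x.
    Returns (learning_rate, weight_decay) for the given iteration.
    """
    drops = sum(iteration >= step * scale for step in (80_000, 125_000))
    return base_lr / 10**drops, base_wd / 10**drops
```

For example, `hyperparams(90_000)` falls after the first drop and returns roughly `(3e-4, 5e-4)`, while `hyperparams(90_000, scale=2)` (the 60f network) is still at the initial values.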
data augmentation:
The paper is fairly straightforward... I'll flesh out these notes when I have time.