
Video Frame Interpolation via Adaptive Convolution
CVPR2017
http://web.cecs.pdx.edu/~fliu/project/adaconv/

This paper performs frame interpolation with a CNN. Pixel interpolation is cast as convolution over corresponding patches in the two neighboring frames: a fully convolutional CNN estimates a spatially-adaptive convolutional kernel for each output pixel. These kernels capture both the motion and the interpolation coefficients, and are convolved directly with the input frames to synthesize the middle frame, as illustrated below:
[Figure 1: method overview]

Given two video frames I1 and I2, the method aims to interpolate a frame Î temporally in the middle of the two input frames.

3 Video Frame Interpolation
Traditional frame interpolation is a two-step approach: first estimate the motion between the two frames, then interpolate the pixel colors based on that motion. However, optical flow estimation is often unreliable due to occlusion, motion blur, and lack of texture.

The strategy here is interpolation by convolution:
[Figure 2: interpolation by convolution]
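The idea can be written compactly as follows (a sketch of the formulation; P1 and P2 denote the patches centered at the output pixel in the two input frames, and K is the estimated per-pixel kernel):

```latex
\hat{I}(x, y) = K(x, y) * P(x, y),
\qquad
P(x, y) = \big[\, P_1(x, y) \;\; P_2(x, y) \,\big]
```

A single convolution with a kernel spanning both patches thus performs motion compensation and color blending in one step.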

Advantages of formulating pixel interpolation as convolution:
1) Combining motion estimation and pixel synthesis into a single step provides a more robust solution.
2) The convolution kernel provides flexibility to account for and address difficult cases like occlusion.
3) Once the kernels are estimated, advanced re-sampling techniques can be incorporated seamlessly.

3.1. Convolution kernel estimation
A CNN is used to estimate a proper convolutional kernel to synthesize each output pixel of the interpolated frame.
[Figure 3: convolution kernel estimation network]
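A hedged sketch of the kernel head: the network's final layer produces raw scores for one output pixel, which a softmax turns into non-negative kernel entries summing to one, so the kernel is a valid set of interpolation coefficients. The function name and shapes here are illustrative, not the paper's exact layers.

```python
import numpy as np

def kernel_from_scores(scores):
    """Softmax-normalize raw network scores into a valid interpolation kernel.

    scores : array of shape (41, 82), raw outputs for one output pixel.
    Returns a kernel of the same shape, non-negative and summing to 1.
    """
    z = scores - scores.max()  # subtract max for numerical stability
    w = np.exp(z)
    return w / w.sum()

# Toy check on random scores: entries are non-negative and sum to 1.
rng = np.random.default_rng(0)
k = kernel_from_scores(rng.standard_normal((41, 82)))
print(k.min() >= 0, round(float(k.sum()), 6))  # True 1.0
```

Normalizing the kernel this way also keeps the synthesized pixel a convex combination of input colors, which helps avoid out-of-range values.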

In the paper's implementation, the default receptive field size is 79 × 79 pixels; the convolution patch size is 41 × 41, and the kernel size is 41 × 82 since it is convolved with two patches, one from each frame.
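A minimal NumPy sketch of how one output pixel could be synthesized with these sizes (the function and padding convention are assumptions, not the paper's code): extract the 41 × 41 patches around (x, y) from both frames, stack them side by side into a 41 × 82 block, and take the inner product with the estimated 41 × 82 kernel.

```python
import numpy as np

PATCH = 41  # convolution patch size from the paper

def synthesize_pixel(frame1, frame2, kernel, x, y):
    """Synthesize one interpolated pixel at (x, y).

    frame1, frame2 : 2-D grayscale arrays (assumed pre-padded so the
                     41x41 patch around (x, y) stays in bounds).
    kernel         : 41x82 spatially-adaptive kernel estimated for (x, y);
                     assumed non-negative and summing to 1.
    """
    r = PATCH // 2
    p1 = frame1[y - r:y + r + 1, x - r:x + r + 1]  # 41x41 patch from frame 1
    p2 = frame2[y - r:y + r + 1, x - r:x + r + 1]  # 41x41 patch from frame 2
    patches = np.concatenate([p1, p2], axis=1)     # 41x82, matches the kernel
    return float(np.sum(kernel * patches))

# Toy check: a uniform kernel simply averages the two constant frames.
f1 = np.full((100, 100), 0.2)
f2 = np.full((100, 100), 0.8)
uniform = np.full((PATCH, 2 * PATCH), 1.0 / (PATCH * 2 * PATCH))
print(round(synthesize_pixel(f1, f2, uniform, 50, 50), 3))  # 0.5
```

A learned kernel would instead concentrate its weight on the motion-compensated positions within each patch, blending the two frames according to the estimated motion.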

Loss function
Two losses are designed, a color loss and a gradient loss; the final loss combines the two.
[Figure 4: loss function]
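A hedged reconstruction of the combined loss (the exact norms and weighting are given in the paper; the λ weight here is an assumption): the color term penalizes the difference between the synthesized frame Î and the ground truth, the gradient term penalizes the difference of their image gradients, and the final loss sums the two.

```latex
E_c = \big\| \hat{I} - I_{gt} \big\|_1,
\qquad
E_g = \big\| \nabla \hat{I} - \nabla I_{gt} \big\|_1,
\qquad
E = E_c + \lambda\, E_g
```

The gradient term discourages the blur that a pure color loss tends to produce in interpolated frames.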

4 Experiments
Qualitative evaluation on blurry videos
[Figure 5]

Evaluation on the Middlebury testing set
[Figure 6]

Qualitative evaluation on video with abrupt brightness change
[Figure 7]

Qualitative evaluation with respect to occlusion
[Figure 8]

On a single Nvidia Titan X, the implementation takes about 2.8 seconds with 3.5 gigabytes of memory for a 640 × 480 image, 9.1 seconds with 4.7 gigabytes for 1280 × 720, and 21.6 seconds with 6.8 gigabytes for 1920 × 1080.
