Video Super-Resolution with Convolutional Neural Networks

Armin Kappeler, Seunghwan Yoo, Qiqin Dai, Aggelos K. Katsaggelos

Abstract:

Convolutional neural networks (CNNs) are a special type of deep neural network (DNN). They have so far been successfully applied to image super-resolution (SR) as well as to other image restoration tasks. In this paper, we consider the problem of video super-resolution. We propose a CNN that is trained on both the spatial and the temporal dimensions of videos to enhance their spatial resolution. Consecutive frames are motion compensated and used as input to a CNN that provides super-resolved video frames as output. We investigate different options for combining the video frames within a single CNN architecture. While large image databases are available for training deep neural networks, it is more challenging to create a large video database of sufficient quality for training networks for video restoration. We show that by using images to pretrain our model, a relatively small video database is sufficient to train our model to achieve, and even improve upon, the current state of the art. We compare our proposed approach to current video as well as image super-resolution algorithms.
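The sketch below is not the released VSRnet code; it is a minimal illustration, under assumed layer sizes and a 5-frame window, of the general idea described in the abstract: motion-compensated, already-upscaled neighboring frames are stacked along the channel axis and fed through a small SRCNN-style network that reconstructs the center frame.

```python
# Minimal sketch (illustrative only, not the authors' implementation) of a
# multi-frame super-resolution CNN: motion-compensated luminance frames,
# upscaled to the target resolution (e.g., bicubic), are concatenated along
# the channel axis and mapped to one super-resolved frame.
import torch
import torch.nn as nn

class MultiFrameSRSketch(nn.Module):
    def __init__(self, num_frames=5):          # 5-frame window is an assumption
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(num_frames, 64, kernel_size=9, padding=4),  # combine frames
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),          # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),           # reconstruct center frame
        )

    def forward(self, frames):
        # frames: (batch, num_frames, H, W), motion compensated toward the
        # center frame and already at the target resolution.
        return self.body(frames)

# Toy usage: super-resolve the center frame of a 5-frame window.
model = MultiFrameSRSketch(num_frames=5)
window = torch.randn(1, 5, 144, 176)
sr_frame = model(window)   # shape: (1, 1, 144, 176)
```

Concatenating the frames at the input ("early fusion") is only one of the combination options mentioned in the abstract; the frames could alternatively be processed by separate branches and merged at a later layer.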


Fig. 1: Comparison to the state of the art: SR frames from the Myanmar video produced by competing methods and by our method (VSRnet), for upscale factor 4.


Fig. 2: Frames with motion-blurred objects from the Walk and Foreman sequences, reconstructed with motion compensation (MC) and adaptive motion compensation (AMC) for upscale factor 3.

Citation:

A. Kappeler, S. Yoo, Q. Dai, and A. K. Katsaggelos, "Video Super-Resolution with Convolutional Neural Networks," IEEE Transactions on Computational Imaging, vol. 2, no. 2, pp. 109-122, 2016.

Source Code and Examples:

Due to the large data size, we provide only a few samples here. Additional video samples and the training databases are available upon request.