Published October 15, 2020 | Version v1
Conference paper | Open Access

Quantized Warping and Residual Temporal Integration for Video Super-Resolution on Fast Motions

Description

In recent years, numerous deep learning approaches to video super-resolution have been proposed, increasing the resolution of one frame using information found in neighboring frames. Such methods either warp frames into alignment using optical flow, or else forgo warping and use optical flow as an additional network input. In this work we point out the disadvantages inherent in these two approaches and propose one that combines the best features of both: warping with the integer part of the flow and using the fractional part as network input. Moreover, an iterative residual super-resolution approach is proposed that incrementally improves quality as more neighboring frames are provided. Incorporating the above in a recurrent architecture, we train, evaluate and compare the proposed network to the state of the art, and note its superior performance on sequences with faster motion.
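As a rough illustration of the integer/fractional flow split and the residual integration step described above, here is a minimal PyTorch sketch. The function names, tensor conventions, and the refinement-network interface are illustrative assumptions and are not taken from the paper.

```python
import torch
import torch.nn.functional as F


def quantized_warp(frame, flow):
    """Warp `frame` by the integer part of `flow` only (no interpolation blur),
    and return the fractional flow residual to be fed to the network as input.

    Assumed shapes (illustrative): frame (B, C, H, W), flow (B, 2, H, W)
    with per-pixel (x, y) displacements in pixels.
    """
    int_flow = torch.round(flow)      # integer part of the flow, used for warping
    frac_flow = flow - int_flow       # fractional part, kept as an extra network input

    B, _, H, W = flow.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=flow.dtype, device=flow.device),
        torch.arange(W, dtype=flow.dtype, device=flow.device),
        indexing="ij",
    )
    # Sampling grid displaced by the integer flow only.
    grid_x = xs.unsqueeze(0) + int_flow[:, 0]
    grid_y = ys.unsqueeze(0) + int_flow[:, 1]
    # Normalize to [-1, 1] for grid_sample (align_corners=True convention).
    grid = torch.stack(
        (2.0 * grid_x / (W - 1) - 1.0, 2.0 * grid_y / (H - 1) - 1.0), dim=-1
    )
    # Nearest-neighbor sampling suffices: displacements are whole pixels.
    warped = F.grid_sample(frame, grid, mode="nearest", align_corners=True)
    return warped, frac_flow


def residual_integrate(sr_estimate, refinement_net, warped_neighbor, frac_flow):
    """One step of an iterative residual scheme: the network predicts a
    correction from the current estimate, the integer-warped neighbor and the
    fractional flow, which is added back onto the running estimate.
    All inputs are assumed to share the same spatial resolution in this sketch.
    """
    residual = refinement_net(
        torch.cat((sr_estimate, warped_neighbor, frac_flow), dim=1)
    )
    return sr_estimate + residual
```

In this reading, each additional neighboring frame contributes one residual update, so quality can improve incrementally as more frames arrive; the recurrent architecture mentioned in the abstract would carry the running estimate across steps.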

Files

W55P084.pdf (5.0 MB) — md5:a0085b0bb2cfd1b44fdc3c396e199d00

Additional details

Funding

ANITA – Advanced tools for fighting oNline Illegal TrAfficking (grant agreement 787061), European Commission