Blind Image Super-Resolution with Spatially Variant Degradations

Abstract

Existing deep learning approaches to single image super-resolution have achieved impressive results but mostly assume a setting with fixed pairs of high resolution (HR) and low resolution (LR) images. However, to robustly address realistic upscaling scenarios where the relation between high resolution and low resolution images is unknown, blind image super-resolution is required. To this end, we propose a solution that relies on three components: First, we use a degradation-aware super-resolution network to synthesize the HR image given a low resolution image and the corresponding blur kernel. Second, we train a kernel discriminator to analyze the generated HR image and predict errors caused by providing an incorrect blur kernel to the generator. Finally, we present an optimization procedure that recovers both the degradation kernel and the HR image by minimizing the error predicted by our kernel discriminator. We also show how to extend our approach to spatially variant degradations, which typically arise in visual effects pipelines when compositing content from different sources, and how to enable both local and global user interaction in the upscaling process.
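The three components can be summarized as a small set of networks: a kernel encoder, a degradation-aware generator, and a kernel discriminator. The following is a minimal PyTorch-style sketch of how the pieces fit together; all module names, layer counts, and channel sizes are illustrative assumptions, not the architectures used in the paper.

```python
import torch
import torch.nn as nn

class KernelEncoder(nn.Module):
    """Maps a blur kernel to a low-dimensional code (F_k). Sizes are assumptions."""
    def __init__(self, kernel_size=21, code_dim=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(kernel_size * kernel_size, 64), nn.ReLU(),
            nn.Linear(64, code_dim),
        )

    def forward(self, k):                      # k: (B, kernel_size**2) flattened kernel
        return self.net(k)

class DegradationAwareSR(nn.Module):
    """Generator F_g: upsamples the LR image conditioned on the kernel code."""
    def __init__(self, code_dim=10, scale=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3 + code_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),            # rearranges channels into a scale x upsampled image
        )

    def forward(self, lr, code):               # lr: (B, 3, H, W), code: (B, code_dim)
        # Broadcast the kernel code over the spatial dimensions and concatenate.
        code_map = code[:, :, None, None].expand(-1, -1, lr.shape[2], lr.shape[3])
        return self.body(torch.cat([lr, code_map], dim=1))

class KernelDiscriminator(nn.Module):
    """F_d: predicts a per-pixel error map caused by using an incorrect kernel."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, sr):                     # sr: (B, 3, scale*H, scale*W)
        return self.net(sr)
```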

Publication
SIGGRAPH Asia

Overview

Overview figure. In blind super-resolution, the degradation kernel $k$ applied to the high resolution image to obtain the low resolution image $I_l$ is unknown. Our pipeline is duplicated for two different kernels (a) and (b): the degradation-aware generator $\mathcal{F}_g$ computes a high resolution output according to the provided blur kernel $k$. A neural network $\mathcal{F}_k$ is used to map the kernel to a low-dimensional representation. The two kernels result in different high resolution estimates; kernel (a), being farther from the unknown original degradation, leads to more artifacts. To detect this, we propose a kernel discriminator network $\mathcal{F}_d$ that predicts the error caused by using the incorrect kernel. With these two networks, kernel estimation can be expressed as finding the blur kernel that results in the least amount of error and artifacts in the predicted high resolution image (see the paper for details). Photo credits: Pixabay/pexels.com.
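Given these networks, kernel estimation reduces to an optimization over the blur kernel: find the kernel whose high resolution estimate the discriminator flags as having the least error. Below is a hedged sketch of that search loop, reusing the hypothetical modules from the previous snippet; the kernel parametrization, optimizer, step count, and learning rate are assumptions rather than the paper's exact procedure.

```python
import torch

def estimate_kernel_and_sr(lr, generator, kernel_encoder, discriminator,
                           kernel_size=21, steps=200, step_size=1e-2):
    """Recover the blur kernel and HR image by minimizing the kernel
    discriminator's predicted error (illustrative settings only)."""
    # Optimize kernel logits; a softmax keeps the kernel non-negative and normalized.
    kernel_logits = torch.zeros(1, kernel_size * kernel_size, requires_grad=True)
    opt = torch.optim.Adam([kernel_logits], lr=step_size)

    for _ in range(steps):
        opt.zero_grad()
        k = torch.softmax(kernel_logits, dim=1)          # current kernel estimate
        code = kernel_encoder(k)                         # low-dimensional kernel code
        sr = generator(lr, code)                         # HR estimate for this kernel
        err_map = discriminator(sr)                      # predicted per-pixel error
        loss = err_map.abs().mean()                      # fewer artifacts -> lower loss
        loss.backward()
        opt.step()

    with torch.no_grad():
        k = torch.softmax(kernel_logits, dim=1)
        sr = generator(lr, kernel_encoder(k))
    return k.view(kernel_size, kernel_size), sr
```

For spatially variant degradations, the same idea applies per region: the kernel (and hence its code) varies across the image, and the discriminator's error map localizes which regions were upscaled with the wrong kernel.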

Results (screenshots from the paper)

Figure: comparison of results (screenshot from the paper).