Flexible Style Image Super-Resolution using Conditional Objective

13 Jan 2022 · Seung Ho Park, Young Su Moon, Nam Ik Cho

Recent studies have significantly enhanced the performance of single-image super-resolution (SR) using convolutional neural networks (CNNs). While there can be many high-resolution (HR) solutions for a given input, most existing CNN-based methods do not explore alternative solutions during inference. A typical approach to obtaining alternative SR results is to train multiple SR models with different loss weightings and exploit the combination of these models. Instead of using multiple models, we present a more efficient method that trains a single adjustable SR model on various combinations of losses by taking advantage of multi-task learning. Specifically, we optimize an SR model with a conditional objective during training, where the objective is a weighted sum of multiple perceptual losses at different feature levels. The weights vary according to given conditions, and the set of weights is defined as a style controller. We also present an architecture suited to this training scheme: a Residual-in-Residual Dense Block equipped with spatial feature transformation (SFT) layers. At inference, the trained model can generate locally different outputs conditioned on the style control map. Extensive experiments show that the proposed SR model produces various desirable reconstructions without artifacts and yields quantitative performance comparable to state-of-the-art SR methods.
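The sketch below illustrates the two ideas named in the abstract: a conditional objective formed as a weighted sum of perceptual losses at several feature levels, and an SFT layer that modulates features with a style control map. It is a minimal reading of the abstract, not the authors' code; the class names (MultiLevelPerceptualLoss, SFTLayer), the choice of VGG-19 tap layers, and the SFT parameterisation are all illustrative assumptions.

```python
# Minimal PyTorch sketch of a conditional perceptual objective and an SFT layer.
# Assumptions: VGG-19 feature taps, layer indices, and the scale/shift
# parameterisation are illustrative, not the paper's exact implementation.
import torch
import torch.nn as nn
import torchvision.models as models


class MultiLevelPerceptualLoss(nn.Module):
    """Weighted sum of L1 distances between VGG-19 features at several levels."""

    def __init__(self, layer_ids=(3, 8, 17, 26)):  # assumed tap points in vgg19.features
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg
        self.layer_ids = set(layer_ids)

    def features(self, x):
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layer_ids:
                feats.append(x)
        return feats

    def forward(self, sr, hr, weights):
        # weights: one scalar per feature level, derived from the style controller t
        loss = 0.0
        for w, f_sr, f_hr in zip(weights, self.features(sr), self.features(hr)):
            loss = loss + w * nn.functional.l1_loss(f_sr, f_hr)
        return loss


class SFTLayer(nn.Module):
    """Spatial feature transform: scale and shift feature maps using
    modulation maps predicted from the style control map."""

    def __init__(self, n_feat, n_cond):
        super().__init__()
        self.scale = nn.Sequential(nn.Conv2d(n_cond, n_feat, 1), nn.ReLU(True),
                                   nn.Conv2d(n_feat, n_feat, 1))
        self.shift = nn.Sequential(nn.Conv2d(n_cond, n_feat, 1), nn.ReLU(True),
                                   nn.Conv2d(n_feat, n_feat, 1))

    def forward(self, feat, cond):
        # cond is the style control map, resized to the spatial size of feat
        return feat * (1 + self.scale(cond)) + self.shift(cond)
```

Under this reading, the controller t would be sampled during training and mapped both to the per-level loss weights and to the SFT condition; at inference, supplying a spatially varying control map lets different image regions receive different reconstruction styles.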

Benchmark results (Task: Image Super-Resolution; each cell shows metric value with global rank in parentheses)

Dataset                   | Model         | PSNR        | SSIM         | LPIPS       | LRPSNR     | NIQE      | DISTS
BSD100 - 4x upscaling     | FxSR-PD t=0.8 | 24.77 (#60) | 0.6817 (#49) | 0.1572 (#3) | -          | -         | -
BSD100 - 4x upscaling     | FxSR-PD t=0.0 | 26.38 (#52) | 0.738 (#26)  | 0.3433 (#5) | -          | -         | -
BSD100 - 8x upscaling     | FxSR-PD t=0.0 | 23.6 (#5)   | 0.5728 (#5)  | 0.5079 (#2) | 47.12 (#1) | 5.49 (#2) | 0.2753 (#1)
BSD100 - 8x upscaling     | FxSR-PD t=0.8 | 21.93 (#6)  | 0.5039 (#6)  | 0.3129 (#1) | 42.41 (#2) | 4.58 (#1) | 0.1972 (#2)
DIV2K val - 4x upscaling  | FxSR-PD t=0.8 | 27.51 (#10) | 0.789 (#8)   | 0.1028 (#2) | 50.54 (#4) | 2.81 (#1) | 0.0513 (#2)
DIV2K val - 4x upscaling  | FxSR-PD t=0.0 | 29.24 (#3)  | 0.8383 (#4)  | 0.239 (#7)  | 53.3 (#1)  | 4.11 (#3) | 0.1169 (#3)
DIV2K val - 8x upscaling  | FxSR-PD t=0.8 | 23.56 (#2)  | 0.6241 (#2)  | 0.2403 (#1) | 42.66 (#2) | 3.61 (#1) | 0.119 (#1)
DIV2K val - 8x upscaling  | FxSR-PD t=0.0 | 25.6 (#1)   | 0.6989 (#1)  | 0.3857 (#2) | 46.96 (#1) | 4.41 (#2) | 0.1953 (#2)
General100 - 4x upscaling | FxSR-PD t=0.0 | 29.94 (#1)  | 0.8629 (#1)  | 0.1519 (#3) | 52.22 (#1) | 6.05 (#2) | 0.1205 (#3)
General100 - 4x upscaling | FxSR-PD t=0.8 | 28.44 (#3)  | 0.8229 (#3)  | 0.0784 (#2) | 49.82 (#2) | 4.54 (#1) | 0.0831 (#2)
General100 - 8x upscaling | FxSR-PD t=0.8 | 24 (#2)     | 0.6534 (#2)  | 0.2058 (#1) | 41.36 (#2) | 5.46 (#1) | 0.1716 (#1)
General100 - 8x upscaling | FxSR-PD t=0.0 | 25.42 (#1)  | 0.7097 (#1)  | 0.2924 (#2) | 44.28 (#1) | 6.09 (#2) | 0.2134 (#2)