Toward Real-World Super-Resolution via Adaptive Downsampling Models

Cited 10 times in Web of Science; cited 14 times in Scopus
Authors

Son, Sanghyun; Kim, Jaeha; Lai, Wei-Sheng; Yang, Ming-Hsuan; Lee, Kyoung Mu

Issue Date
2022-11
Publisher
Institute of Electrical and Electronics Engineers
Citation
IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.44 No.11, pp.8657-8670
Abstract
Most image super-resolution (SR) methods are developed on synthetic low-resolution (LR) and high-resolution (HR) image pairs that are constructed by a predetermined operation, e.g., bicubic downsampling. As existing methods typically learn an inverse mapping of the specific function, they produce blurry results when applied to real-world images whose exact formulation is different and unknown. Therefore, several methods attempt to synthesize much more diverse LR samples or learn a realistic downsampling model. However, due to restrictive assumptions on the downsampling process, they are still biased and less generalizable. This study proposes a novel method to simulate an unknown downsampling process without imposing restrictive prior knowledge. We propose a generalizable low-frequency loss (LFL) in the adversarial training framework to imitate the distribution of target LR images without using any paired examples. Furthermore, we design an adaptive data loss (ADL) for the downsampler, which can be adaptively learned and updated from the data during the training loops. Extensive experiments validate that our downsampling model helps existing SR methods perform more accurate reconstructions on various synthetic and real-world examples than the conventional approaches.
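The idea behind a low-frequency loss can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the choice of a box blur as the low-pass filter, the kernel size, and the L1 distance are all assumptions made here for illustration. The point is only that the loss constrains the low-frequency content of the learned downsampler's output to match a reference, leaving high-frequency detail to be shaped by the unpaired adversarial term.

```python
import numpy as np

def low_pass(img, k=5):
    # Box blur as a stand-in low-pass filter (hypothetical choice, not the
    # paper's filter). Edge padding keeps the output the same size as the input.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def low_frequency_loss(generated_lr, reference_lr, k=5):
    # L1 distance between the low-frequency components of the generated LR
    # image and a reference (e.g., a bicubically downsampled HR input).
    # High frequencies are deliberately left unconstrained so the adversarial
    # loss can push them toward the target LR distribution.
    return np.abs(low_pass(generated_lr, k) - low_pass(reference_lr, k)).mean()
```

In a training loop, this term would be added to the downsampler's adversarial objective; when the generated LR image already matches the reference at low frequencies, the loss is zero and only the adversarial gradient remains.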
ISSN
0162-8828
URI
https://hdl.handle.net/10371/188963
DOI
https://doi.org/10.1109/TPAMI.2021.3106790
Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.
