MetaF2N: Blind Image Super-Resolution by Learning Efficient Model Adaptation from Faces
Abstract
Due to their highly structured characteristics, faces are easier to recover than natural scenes in blind image super-resolution. The degradation representation of an image can therefore be extracted from pairs of low-quality and recovered faces. Using this degradation representation, realistic low-quality images can be synthesized to fine-tune the super-resolution model for a given real-world low-quality image. However, such a procedure is time-consuming and laborious, and the gaps between the recovered faces and the ground-truths further increase the optimization uncertainty. To facilitate efficient model adaptation towards image-specific degradations, we propose MetaF2N, which leverages the contained Faces to fine-tune model parameters for adapting to the whole Natural image within a Meta-learning framework. The degradation-extraction and low-quality-image-synthesis steps are thus circumvented in MetaF2N, and only one fine-tuning step is required to attain decent performance. Considering the gaps between the recovered faces and the ground-truths, we further deploy a MaskNet that adaptively predicts loss weights at different positions to reduce the impact of low-confidence areas. To evaluate the proposed MetaF2N, we have collected a real-world low-quality dataset with one or multiple faces in each image, and MetaF2N achieves superior performance on both synthetic and real-world datasets. Source code, pre-trained models, and collected datasets are available at https://github.com/yinzhicun/MetaF2N.
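To make the adaptation idea concrete, below is a minimal PyTorch sketch of the single fine-tuning step suggested by the abstract: face crops recovered by a face prior supervise one gradient update of the super-resolution model, with a MaskNet down-weighting low-confidence regions, before the adapted model is applied to the whole image. The network definitions (TinySRNet, TinyMaskNet), the L1 loss, and the random tensors standing in for detected and restored faces are assumptions for illustration; the actual MetaF2N architectures, losses, and outer meta-training loop differ and are not shown.

```python
# Illustrative sketch only, not the official MetaF2N implementation.
import torch
import torch.nn as nn


class TinySRNet(nn.Module):
    """Stand-in for the super-resolution backbone (x4 upscaling)."""
    def __init__(self, channels=32, scale=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.body(x))


class TinyMaskNet(nn.Module):
    """Stand-in for MaskNet: predicts per-pixel loss weights in [0, 1]."""
    def __init__(self, channels=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(6, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, sr_face, restored_face):
        return self.body(torch.cat([sr_face, restored_face], dim=1))


def adapt_and_super_resolve(sr_model, mask_net, lq_image, lq_faces,
                            restored_faces, inner_lr=1e-4):
    """One inner-loop step: fine-tune sr_model on the in-image face pairs,
    then run the adapted model on the whole low-quality natural image.
    The MaskNet (meta-trained in the outer loop, not shown) weights the loss."""
    optimizer = torch.optim.SGD(sr_model.parameters(), lr=inner_lr)

    sr_faces = sr_model(lq_faces)                  # super-resolve LQ face crops
    weights = mask_net(sr_faces, restored_faces)   # confidence-aware loss weights
    loss = (weights * (sr_faces - restored_faces).abs()).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                               # the single fine-tuning step

    with torch.no_grad():
        return sr_model(lq_image)                  # adapted model on full image


if __name__ == "__main__":
    sr_model, mask_net = TinySRNet(), TinyMaskNet()
    lq_image = torch.rand(1, 3, 64, 64)            # whole low-quality image
    lq_faces = torch.rand(2, 3, 32, 32)            # detected LQ face crops
    restored_faces = torch.rand(2, 3, 128, 128)    # faces recovered by a face prior (x4)
    sr_image = adapt_and_super_resolve(sr_model, mask_net, lq_image,
                                       lq_faces, restored_faces)
    print(sr_image.shape)                          # torch.Size([1, 3, 256, 256])
```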