Enhancing Detail Preservation for Customized Text-to-Image Generation: A Regularization-Free Approach
Abstract
Recent text-to-image generation models have demonstrated an impressive capability to generate text-aligned images with high fidelity. However, generating images of a novel concept provided by a user's input image remains a challenging task. To address this problem, researchers have explored various methods for customizing pre-trained text-to-image generation models. Most existing customization methods rely on regularization techniques to prevent over-fitting. While regularization eases the challenge of customization and enables successful content creation with respect to the text guidance, it may restrict the model's capability, resulting in the loss of detailed information and inferior performance. In this work, we propose a novel framework for customized text-to-image generation that does not use regularization. Specifically, our framework consists of an encoder network and a novel sampling method that together tackle the over-fitting problem without regularization. With the proposed framework, we can customize a large-scale text-to-image generation model within half a minute on a single GPU, using only one image provided by the user. We demonstrate in experiments that our framework outperforms existing methods and preserves more fine-grained details.
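The abstract describes an encoder that maps a single user image to a concept representation used to condition the generator. As a rough illustration only (not the paper's actual architecture), the sketch below shows one common way such an encoder can be wired up: projecting frozen image features into a pseudo-token embedding that replaces a placeholder position in the text-encoder output. All names, dimensions, and the placeholder mechanism here are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ConceptEncoder(nn.Module):
    """Hypothetical encoder: maps image features to a pseudo-token embedding
    that can be spliced into a frozen text encoder's output sequence."""
    def __init__(self, image_dim: int = 768, token_dim: int = 768):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(image_dim, token_dim),
            nn.GELU(),
            nn.Linear(token_dim, token_dim),
        )

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # image_features: (B, image_dim) -> concept embedding: (B, token_dim)
        return self.proj(image_features)

def inject_concept(text_embeddings: torch.Tensor,
                   concept_embedding: torch.Tensor,
                   placeholder_index: int) -> torch.Tensor:
    """Replace the placeholder token's embedding with the learned concept
    embedding, so the diffusion model is conditioned on the new concept."""
    out = text_embeddings.clone()
    out[:, placeholder_index, :] = concept_embedding
    return out

# Toy usage: one user image, a 77-token prompt, placeholder at position 5.
# The random tensors stand in for outputs of frozen image/text encoders.
encoder = ConceptEncoder()
image_features = torch.randn(1, 768)
text_embeddings = torch.randn(1, 77, 768)
conditioned = inject_concept(text_embeddings, encoder(image_features), placeholder_index=5)
print(conditioned.shape)  # torch.Size([1, 77, 768])
```

How the encoder is trained and how the proposed sampling method avoids over-fitting without regularization are specific to the paper and are not reproduced here.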