VQGAN-CLIP: Open Domain Image Generation and Editing with Natural Language Guidance
Abstract
Generating and editing images from open domain text prompts is a challenging task that heretofore has required expensive and specially trained models. We demonstrate a novel methodology for both tasks which is capable of producing images of high visual quality from text prompts of significant semantic complexity without any training by using a multimodal encoder to guide image generations. We demonstrate on a variety of tasks how using CLIP [37] to guide VQGAN [11] produces higher visual quality outputs than prior, less flexible approaches like DALL-E [38], GLIDE [33] and Open-Edit [24], despite not being trained for the tasks presented. Our code is available in a public repository.
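The core idea the abstract describes is training-free guidance: a frozen VQGAN decodes a latent tensor into an image, a frozen CLIP scores how well that image matches the text prompt, and gradient descent on the latent improves the score. Below is a minimal sketch of that loop, not the authors' released code: `vqgan` is a hypothetical stand-in for a pretrained VQGAN decoder returning RGB images in [-1, 1], the latent shape is illustrative, and the CLIP calls follow the public openai/CLIP package.

```python
import torch
import torch.nn.functional as F
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float().eval()
for p in clip_model.parameters():
    p.requires_grad_(False)  # CLIP stays frozen; only the latent is optimised

# CLIP's standard input normalisation constants.
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

prompt = "a watercolor painting of a lighthouse at dusk"
with torch.no_grad():
    text_feat = F.normalize(
        clip_model.encode_text(clip.tokenize([prompt]).to(device)), dim=-1
    )

# Latent to optimise; real VQGAN latent shapes depend on the checkpoint (assumption here).
z = torch.randn(1, 256, 16, 16, device=device, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.1)

for step in range(500):
    img = vqgan.decode(z)                     # assumed: (1, 3, H, W) in [-1, 1]
    img = (img.clamp(-1, 1) + 1) / 2          # map to [0, 1]
    img = F.interpolate(img, size=224, mode="bilinear", align_corners=False)
    img_feat = F.normalize(clip_model.encode_image((img - mean) / std), dim=-1)
    loss = 1 - (img_feat * text_feat).sum()   # cosine-distance loss to the prompt
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In practice the released implementation scores several randomly augmented crops ("cutouts") of the decoded image rather than a single resized copy, which stabilises the optimisation.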