Unsupervised Cross-Domain Image Generation
Abstract
We study the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, S and T, we would like to learn a generative function G that maps an input sample from S to the domain T, such that the output of a given function f, which accepts inputs in either domain, would remain unchanged. Other than the function f, the training data is unsupervised and consists of a set of samples from each domain. The Domain Transfer Network (DTN) we present employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves. We apply our method to visual domains including digits and face images and demonstrate its ability to generate convincing novel images of previously unseen entities, while preserving their identity.
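As a rough illustration of the compound loss described above, the sketch below combines the three terms in PyTorch. The function names, loss weights, and the binary (rather than multiclass) adversarial form are simplifying assumptions for brevity, not the authors' implementation.

```python
# Hypothetical sketch of the generator-side DTN compound loss.
# G, D, f, and the weights alpha/beta are illustrative assumptions.
import torch
import torch.nn.functional as F

def dtn_generator_loss(G, D, f, x_s, x_t, alpha=1.0, beta=1.0):
    """Combine the three loss terms described in the abstract.

    G    : generator mapping samples into the target domain T
    D    : discriminator scoring whether an image is a real T sample
    f    : fixed feature function defined on both domains S and T
    x_s  : batch of samples from the source domain S
    x_t  : batch of samples from the target domain T
    """
    g_s = G(x_s)  # transferred source samples
    g_t = G(x_t)  # target samples passed through G

    # Adversarial term: generated images should be judged as real T samples.
    # (The paper uses a multiclass GAN loss; a binary form is shown here
    # only to keep the sketch short.)
    fake = torch.cat([g_s, g_t], dim=0)
    adv = F.binary_cross_entropy_with_logits(
        D(fake), torch.ones(fake.size(0), 1)
    )

    # f-constancy term: f should give the same representation
    # before and after applying G to a source sample.
    f_const = F.mse_loss(f(g_s), f(x_s))

    # Identity regularizer: G should map T samples to themselves.
    identity = F.mse_loss(g_t, x_t)

    return adv + alpha * f_const + beta * identity
```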