arxiv:1611.02200

Unsupervised Cross-Domain Image Generation

Published on Nov 7, 2016
Authors: Yaniv Taigman, Adam Polyak, Lior Wolf
Abstract

We study the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, S and T, we would like to learn a generative function G that maps an input sample from S to the domain T, such that the output of a given function f, which accepts inputs in either domain, would remain unchanged. Other than the function f, the training data is unsupervised and consists of a set of samples from each domain. The Domain Transfer Network (DTN) we present employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves. We apply our method to visual domains including digits and face images and demonstrate its ability to generate convincing novel images of previously unseen entities, while preserving their identity.
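To make the structure of the compound loss concrete, here is a minimal PyTorch-style sketch of the generator objective, assuming a generator `G`, a fixed pretrained feature function `f`, a three-class discriminator `D`, and illustrative weights `alpha` and `beta`; these names and values are placeholders, not the paper's exact formulation (which, among other details, also adds a smoothness term and trains D on a separate three-way classification objective).

```python
import torch
import torch.nn.functional as F

def dtn_generator_loss(G, f, D, x_s, x_t, alpha=15.0, beta=15.0):
    """Sketch of the DTN compound generator loss.

    G : generator mapping samples from either domain into T
    f : fixed feature function that accepts inputs from S or T
    D : discriminator classifying {G(x_s), G(x_t), real x_t} into 3 classes
    alpha, beta : hypothetical weights for the f-constancy and identity terms
    """
    g_s, g_t = G(x_s), G(x_t)

    # Multiclass GAN term: the generator tries to make D assign both
    # G(x_s) and G(x_t) to the "real T sample" class (index 2 here, by
    # an arbitrary convention).
    real_t = torch.full((x_s.size(0),), 2, dtype=torch.long, device=x_s.device)
    l_gan = F.cross_entropy(D(g_s), real_t) + F.cross_entropy(D(g_t), real_t)

    # f-constancy: f's output should be unchanged by the transfer on S samples.
    l_const = F.mse_loss(f(g_s), f(x_s))

    # Identity regularizer: G should map T samples to themselves.
    l_tid = F.mse_loss(g_t, x_t)

    return l_gan + alpha * l_const + beta * l_tid
```

A full training loop would alternate this generator update with a discriminator update on the three-way task of separating transferred S samples, transferred T samples, and real T samples.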
