arxiv:2211.17235

NeRFInvertor: High Fidelity NeRF-GAN Inversion for Single-shot Real Image Animation

Published on Nov 30, 2022
Abstract

NeRF-based generative models have shown an impressive capacity to generate high-quality images with consistent 3D geometry. Despite successfully synthesizing fake-identity images randomly sampled from the latent space, adopting these models to generate face images of real subjects remains challenging due to the so-called inversion issue. In this paper, we propose a universal method to surgically fine-tune these NeRF-GAN models in order to achieve high-fidelity animation of real subjects from only a single image. Given the optimized latent code for an out-of-domain real image, we employ 2D loss functions on the rendered image to reduce the identity gap. Furthermore, our method leverages explicit and implicit 3D regularizations, using in-domain neighborhood samples around the optimized latent code, to remove geometrical and visual artifacts. Our experiments confirm the effectiveness of our method in realistic, high-fidelity, and 3D-consistent animation of real faces on multiple NeRF-GAN models across different datasets.
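The fine-tuning objective sketched in the abstract combines a 2D reconstruction loss on the rendered image with a regularization over in-domain neighborhood samples around the optimized latent code. A minimal, hedged sketch of that loss structure is shown below; `fake_render`, the image sizes, the perturbation scale, and the weight `0.1` are all illustrative assumptions, not the paper's actual model or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_render(z):
    """Toy stand-in for a NeRF-GAN forward pass (NOT the paper's model):
    maps a latent code to a tiny 4x3 'image'."""
    return np.outer(np.sin(z[:4]), np.cos(z[:3]))

# Optimized latent code for the out-of-domain real image (assumed to come
# from a prior inversion/optimization step), and the real target image.
z_opt = rng.standard_normal(512)
target = rng.random((4, 3))

# 2D loss on the rendered image (pixel-wise L2 here) to reduce the
# identity gap between the rendering and the real subject.
loss_2d = np.mean((fake_render(z_opt) - target) ** 2)

# In-domain neighborhood samples around z_opt: small Gaussian perturbations
# of the optimized latent code, regularized to render consistently — a
# rough sketch of the explicit/implicit 3D regularization idea.
neighbors = [z_opt + 0.05 * rng.standard_normal(512) for _ in range(4)]
ref = fake_render(z_opt)
loss_reg = np.mean([np.mean((fake_render(z) - ref) ** 2) for z in neighbors])

# Combined fine-tuning objective; the weight 0.1 is an assumed value.
total_loss = loss_2d + 0.1 * loss_reg
print(float(total_loss))
```

In an actual implementation the renderer would be a differentiable NeRF-GAN generator and the latent code (or generator weights) would be updated by gradient descent on this combined loss; the sketch only illustrates how the two terms are assembled.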
