VoxHammer: Training-Free Precise and Coherent 3D Editing in Native 3D Space
Abstract
VoxHammer is a training-free method that performs precise and coherent 3D editing in native 3D latent space, ensuring consistency in preserved regions and high overall quality.
3D local editing of specified regions is crucial for the gaming industry and robotic interaction. Recent methods typically edit rendered multi-view images and then reconstruct 3D models, but they struggle to preserve unedited regions precisely and to maintain overall coherence. Inspired by structured 3D generative models, we propose VoxHammer, a novel training-free approach that performs precise and coherent editing in 3D latent space. Given a 3D model, VoxHammer first predicts its inversion trajectory and obtains its inverted latents and key-value tokens at each timestep. Subsequently, in the denoising and editing phase, we replace the denoising features of the preserved regions with the corresponding inverted latents and cached key-value tokens. By retaining these contextual features, this approach ensures consistent reconstruction of preserved areas and coherent integration of edited parts. To evaluate the consistency of preserved regions, we construct Edit3D-Bench, a human-annotated dataset comprising hundreds of samples, each with carefully labeled 3D editing regions. Experiments demonstrate that VoxHammer significantly outperforms existing methods in both 3D consistency of the preserved regions and overall quality. Our method holds promise for synthesizing high-quality paired editing data, thereby laying the data foundation for in-context 3D generation. See our project page at https://huanngzh.github.io/VoxHammer-Page/.
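The abstract describes a two-phase procedure: invert the source 3D latent while caching per-step latents and attention key/value tokens, then denoise under the edit condition while splicing those cached features back into the preserved region at every step. The PyTorch-style sketch below illustrates that control flow only; `model`, `invert_step`, `denoise_step`, the `kv_override` hook, and all tensor shapes are hypothetical placeholders, not VoxHammer's actual API (see the GitHub repo for the real implementation).

```python
# Hedged sketch of the two-phase editing loop summarized in the abstract.
# `model`, `invert_step`, `denoise_step`, and `kv_override` are hypothetical
# placeholders for a structured-3D-latent diffusion backbone; the actual
# implementation is at https://github.com/Nelipot-Lee/VoxHammer.
import torch

@torch.no_grad()
def voxhammer_edit(model, x0, edit_cond, preserve_mask, timesteps):
    """
    x0:            source 3D latent tokens, shape [N, C]
    edit_cond:     conditioning for the edited content (e.g. a text embedding)
    preserve_mask: bool tensor [N], True where a token must stay unedited
    timesteps:     denoising schedule ordered from most to least noisy
    """
    # Phase 1: inversion. Walk the clean latent back to noise, caching the
    # inverted latent and the attention key/value tokens at every timestep.
    cached_latents, cached_kv = {}, {}
    x = x0
    for t in reversed(timesteps):  # clean -> noise
        x, kv = model.invert_step(x, t, return_kv=True)
        cached_latents[t], cached_kv[t] = x, kv

    # Phase 2: denoising under the edit condition. At every step, overwrite
    # preserved-region latents with their cached inverted counterparts and
    # re-inject the cached key/value tokens, so the unedited region is
    # reconstructed consistently and the edit stays coherent with its context.
    for t in timesteps:  # noise -> clean
        x = torch.where(preserve_mask[:, None], cached_latents[t], x)
        x = model.denoise_step(
            x, t, cond=edit_cond,
            kv_override=(cached_kv[t], preserve_mask),
        )
    return x
```

Note that both the latents and the key/value tokens of the preserved region are replaced, not the latents alone: per the abstract, retaining these contextual features is what keeps attention inside the edited region conditioned on the true preserved context.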
Community
TL;DR: A training-free 3D editing approach that performs precise and coherent editing in native 3D latent space instead of multi-view space.
Project page: https://huanngzh.github.io/VoxHammer-Page/
Code: https://github.com/Nelipot-Lee/VoxHammer
Edit3D-Bench: https://github.com/Nelipot-Lee/VoxHammer/Edit3D-Bench
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- CoreEditor: Consistent 3D Editing via Correspondence-constrained Diffusion (2025)
- Follow-Your-Shape: Shape-Aware Image Editing via Trajectory-Guided Region Control (2025)
- Robust 3D-Masked Part-level Editing in 3D Gaussian Splatting with Regularized Score Distillation Sampling (2025)
- Mastering Regional 3DGS: Locating, Initializing, and Editing with Diverse 2D Priors (2025)
- DisCo3D: Distilling Multi-View Consistency for 3D Scene Editing (2025)
- CannyEdit: Selective Canny Control and Dual-Prompt Guidance for Training-Free Image Editing (2025)
- Stable Score Distillation (2025)