arXiv:2512.13678

Feedforward 3D Editing via Text-Steerable Image-to-3D

Published on Dec 15 · Submitted by Jiacheng Liu on Dec 17

Abstract

Steer3D enables text-based editing of AI-generated 3D assets by adapting ControlNet for image-to-3D generation with flow-matching training and Direct Preference Optimization.

AI-generated summary

Recent progress in image-to-3D has opened up immense possibilities for design, AR/VR, and robotics. However, to use AI-generated 3D assets in real applications, a critical requirement is the ability to edit them easily. We present a feedforward method, Steer3D, that adds text steerability to image-to-3D models, enabling generated 3D assets to be edited with language. Our approach is inspired by ControlNet, which we adapt to image-to-3D generation to enable text steering directly in a forward pass. We build a scalable data engine for automatic data generation and develop a two-stage training recipe based on flow-matching training and Direct Preference Optimization (DPO). Compared to competing methods, Steer3D follows language instructions more faithfully and stays more consistent with the original 3D asset, while being 2.4x to 28.5x faster. Steer3D demonstrates that it is possible to add a new modality (text) for steering pretrained image-to-3D generative models with only 100k training samples. Project website: https://glab-caltech.github.io/steer3d/
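
For readers unfamiliar with the two ingredients named in the abstract, here is a minimal PyTorch sketch of the general pattern it describes: a trainable ControlNet-style branch with a zero-initialized output projection attached to a frozen backbone, trained with a conditional flow-matching objective. `ZeroLinear`, `TextSteeringBranch`, and the `model(xt, t, image_cond, extra=...)` interface are all hypothetical stand-ins, not Steer3D's actual architecture.

```python
import torch
import torch.nn as nn

class ZeroLinear(nn.Linear):
    """Linear layer initialized to zero, so the new branch is a no-op at the
    start of training (the ControlNet trick: the pretrained model's behavior
    is preserved at initialization)."""
    def __init__(self, in_dim, out_dim):
        super().__init__(in_dim, out_dim)
        nn.init.zeros_(self.weight)
        nn.init.zeros_(self.bias)

class TextSteeringBranch(nn.Module):
    """Hypothetical trainable branch that injects a pooled text embedding and
    feeds its output back through a zero-initialized projection."""
    def __init__(self, dim, text_dim):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, dim)
        self.block = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.zero_out = ZeroLinear(dim, dim)

    def forward(self, h, text_emb):
        # Add text conditioning to the hidden tokens, run the trainable block,
        # then project through the zero-init layer before the residual add.
        h = h + self.text_proj(text_emb).unsqueeze(1)
        return self.zero_out(self.block(h))

def flow_matching_loss(model, branch, x1, image_cond, text_emb):
    """Conditional flow matching: regress the constant velocity (x1 - x0) at
    a random time t along the straight path x_t = (1 - t) * x0 + t * x1."""
    x0 = torch.randn_like(x1)                      # noise sample
    t = torch.rand(x1.shape[0], device=x1.device)  # per-sample time in [0, 1]
    t_ = t.view(-1, 1, 1)
    xt = (1 - t_) * x0 + t_ * x1                   # point on the linear path
    target = x1 - x0                               # velocity target
    pred = model(xt, t, image_cond, extra=branch(xt, text_emb))
    return torch.mean((pred - target) ** 2)

# Toy usage with a placeholder backbone (in reality: a frozen image-to-3D net).
B, N, D, T = 2, 16, 256, 512
branch = TextSteeringBranch(D, T)
backbone = lambda xt, t, img, extra: xt + extra    # stand-in for the backbone
loss = flow_matching_loss(backbone, branch, torch.randn(B, N, D), None,
                          torch.randn(B, T))
loss.backward()
```

At initialization the zero-init projection outputs zeros, so the frozen backbone's predictions are untouched; gradients still reach `zero_out` itself, letting the text branch ramp up smoothly during training.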

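The second training stage uses Direct Preference Optimization. A common way to apply DPO to diffusion/flow models (in the style of Diffusion-DPO) replaces exact log-likelihoods with reference-relative denoising losses on preferred vs. dispreferred samples; the sketch below shows that surrogate loss. Whether Steer3D uses this exact formulation is not stated in the abstract, so treat it as an assumption.

```python
import torch.nn.functional as F

def dpo_loss(loss_w_theta, loss_l_theta, loss_w_ref, loss_l_ref, beta=0.1):
    """Diffusion/flow-style DPO: approximate the log-likelihood ratio with the
    reference-relative drop in flow-matching loss on the preferred (w) vs.
    dispreferred (l) edit, then apply the logistic preference loss."""
    # Lower flow-matching loss ~ higher likelihood, so negate the differences.
    diff_w = -(loss_w_theta - loss_w_ref)   # improvement on preferred sample
    diff_l = -(loss_l_theta - loss_l_ref)   # improvement on dispreferred sample
    return -F.logsigmoid(beta * (diff_w - diff_l)).mean()
```

Each argument is the per-sample flow-matching loss of either the trained model (`theta`) or the frozen reference model (`ref`) on the same noisy input; `beta` controls how strongly preferences override the reference.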
Community

Very cool model that lets you edit 3D digital objects however you like, using natural language instructions!

Project Home: https://glab-caltech.github.io/steer3d/
Demo: https://glab-caltech.github.io/steer3d/#demo

Models citing this paper: 1

Datasets citing this paper: 2

Collections including this paper: 1