Abstract
We introduce Alterbute, a diffusion-based method for editing an object's intrinsic attributes in an image. It can change the color, texture, material, and even the shape of an object while preserving its perceived identity and scene context. Existing approaches either rely on unsupervised priors that often fail to preserve identity, or use overly restrictive supervision that prevents meaningful intrinsic variations. Our method relies on two components: (i) a relaxed training objective that lets the model change both intrinsic and extrinsic attributes, conditioned on an identity reference image, a textual prompt describing the target intrinsic attributes, and a background image with an object mask defining the extrinsic context. At inference, we restrict extrinsic changes by reusing the original background and object mask, ensuring that only the desired intrinsic attributes are altered; (ii) Visual Named Entities (VNEs): fine-grained visual identity categories (e.g., "Porsche 911 Carrera") that group objects sharing identity-defining features while allowing variation in intrinsic attributes. We use a vision-language model to automatically extract VNE labels and intrinsic attribute descriptions from a large public image dataset, enabling scalable, identity-preserving supervision. Alterbute outperforms existing methods on identity-preserving object intrinsic attribute editing.
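The inference-time restriction described above — reusing the original background and object mask so that only the object's intrinsic attributes can change — can be illustrated with a simple mask-based composite. This is a minimal sketch with hypothetical arrays, not the authors' implementation:

```python
import numpy as np

def restrict_extrinsics(edited, original, mask):
    """Composite the edited image back onto the original scene.

    Inside the object mask, the edited pixels (carrying the new intrinsic
    attributes) are kept; outside it, the original background is reused,
    so the extrinsic context cannot change. Hypothetical helper, not the
    paper's code.
    """
    m = mask.astype(edited.dtype)[..., None]  # (H, W) -> (H, W, 1) for RGB broadcast
    return m * edited + (1.0 - m) * original

# Toy example: a 4x4 RGB image where only a 2x2 masked region may change.
original = np.zeros((4, 4, 3))            # black scene
edited = np.ones((4, 4, 3))               # pretend the model repainted everything
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                     # object region

out = restrict_extrinsics(edited, original, mask)
```

Outside the mask, `out` equals the original scene; inside it, the edited content survives. (When the edit also changes the object's shape, a single fixed mask is an oversimplification; this sketch only conveys the compositing idea.)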
Community
[TL;DR] We present Alterbute, a diffusion-based method for editing an object’s intrinsic attributes — color, texture, material, and shape — while preserving its perceived identity and scene context.
Paper 📄: https://arxiv.org/pdf/2601.10714
Project page 🌐: https://talreiss.github.io/alterbute/
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Refaçade: Editing Object with Given Reference Texture (2025)
- OpenSubject: Leveraging Video-Derived Identity and Diversity Priors for Subject-driven Image Generation and Manipulation (2025)
- LoVoRA: Text-guided and Mask-free Video Object Removal and Addition with Learnable Object-aware Localization (2025)
- LooseRoPE: Content-aware Attention Manipulation for Semantic Harmonization (2026)
- Match-and-Fuse: Consistent Generation from Unstructured Image Sets (2025)
- Geometry-Aware Scene-Consistent Image Generation (2025)
- Over++: Generative Video Compositing for Layer Interaction Effects (2025)