arXiv:2506.20652

EditP23: 3D Editing via Propagation of Image Prompts to Multi-View

Published on Jun 25, 2025
Abstract

EditP23 is a mask-free 3D editing method that uses a pair of images to guide edits in the latent space of a pre-trained multi-view diffusion model, ensuring 3D consistency and preserving object identity.

AI-generated summary

We present EditP23, a method for mask-free 3D editing that propagates 2D image edits to multi-view representations in a 3D-consistent manner. In contrast to traditional approaches that rely on text-based prompting or explicit spatial masks, EditP23 enables intuitive edits by conditioning on a pair of images: an original view and its user-edited counterpart. These image prompts guide an edit-aware flow in the latent space of a pre-trained multi-view diffusion model, allowing the edit to be coherently propagated across views. Our method operates in a feed-forward manner, without optimization, and preserves the identity of the original object in both structure and appearance. We demonstrate its effectiveness across a range of object categories and editing scenarios, achieving high fidelity to the source while requiring no manual masks.
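The core idea in the abstract — deriving an edit direction from the original/edited image pair and propagating it across all views in a single feed-forward pass — can be illustrated with a toy sketch. This is not the paper's implementation: in EditP23 the guidance happens inside the denoising loop of a pre-trained multi-view diffusion model, whereas here plain NumPy arrays stand in for latents and a simple linear flow stands in for the learned edit-aware flow. All function and variable names below are illustrative assumptions.

```python
import numpy as np

def propagate_edit(view_latents, src_latent, edited_latent,
                   num_steps=10, strength=1.0):
    """Toy stand-in for edit propagation across views.

    view_latents: list of per-view latent arrays (same shape as src_latent).
    src_latent / edited_latent: latents of the original view and its
    user-edited counterpart; their difference defines the edit direction.
    """
    # Edit direction inferred from the image pair (no mask needed).
    delta = edited_latent - src_latent
    out = [v.copy() for v in view_latents]
    # Feed-forward flow: nudge every view along the edit direction,
    # split over num_steps to mimic iterative denoising guidance.
    for _ in range(num_steps):
        for i, v in enumerate(out):
            out[i] = v + (strength / num_steps) * delta
    return out
```

In this toy version the "propagation" is just a shared linear shift, so every view receives the same edit; the paper's contribution is doing this inside a multi-view diffusion prior so the result stays 3D-consistent and identity-preserving.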

