View-consistent 3D Editing with Gaussian Splatting

ECCV 2024
Yuxuan Wang1, Xuanyu Yi1, Zike Wu1, Na Zhao2, Long Chen3, Hanwang Zhang1,4
1Nanyang Technological University, 2Singapore University of Technology and Design, 3The Hong Kong University of Science and Technology, 4Skywork AI

VcEdit achieves high-quality 3D Gaussian Splatting editing through a view-consistent design.

Abstract

The advent of 3D Gaussian Splatting (3DGS) has revolutionized 3D editing, offering efficient, high-fidelity rendering and enabling precise local manipulations. Currently, diffusion-based 2D editing models are harnessed to modify multi-view rendered images, which then guide the editing of 3DGS models. However, this approach faces a critical issue of multi-view inconsistency, where the guidance images exhibit significant discrepancies across views, leading to mode collapse and visual artifacts of 3DGS.

To this end, we introduce View-consistent Editing (VcEdit), a novel framework that seamlessly incorporates 3DGS into image editing processes, ensuring multi-view consistency in edited guidance images and effectively mitigating mode collapse issues. VcEdit employs two innovative consistency modules: the Cross-attention Consistency Module and the Editing Consistency Module, both designed to reduce inconsistencies in edited images. By incorporating these consistency modules into an iterative pattern, VcEdit proficiently resolves the issue of multi-view inconsistency, facilitating high-quality 3DGS editing across a diverse range of scenes.
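At a high level, the iterative pattern described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names are assumptions, each rendered view is reduced to a single float, and the two consistency modules are modeled jointly as a single step that pulls each edited view toward the multi-view consensus.

```python
# Toy sketch of an iterative view-consistent 3DGS editing loop (VcEdit-style).
# All names and the scalar "image" stand-in are illustrative assumptions.

def render_views(gaussians, n_views):
    # Render the 3DGS model from n_views camera poses (toy: one scalar per view).
    return [gaussians for _ in range(n_views)]

def edit_views(images, offsets=(0.9, 1.1, 1.0, 1.2)):
    # Stand-in for per-view 2D diffusion edits; each view drifts independently,
    # producing the multi-view inconsistency the consistency modules must fix.
    return [img + off for img, off in zip(images, offsets)]

def consistency_module(edited):
    # Combined effect of the Cross-attention and Editing Consistency Modules,
    # modeled here as pulling each edited view toward the multi-view mean.
    mean = sum(edited) / len(edited)
    return [0.5 * (e + mean) for e in edited]

def update_gaussians(gaussians, guidance, lr=0.5):
    # Fit the 3DGS model to the consistent guidance images
    # (toy: step toward the mean guidance value).
    target = sum(guidance) / len(guidance)
    return gaussians + lr * (target - gaussians)

def vc_edit(gaussians, n_views=4, n_iters=3):
    # Iterative pattern: render -> edit -> enforce consistency -> update 3DGS.
    for _ in range(n_iters):
        views = render_views(gaussians, n_views)
        edited = edit_views(views)
        guidance = consistency_module(edited)
        gaussians = update_gaussians(gaussians, guidance)
    return gaussians
```

In this toy setup, the consistency step halves the spread between edited views before they are used as guidance, so the 3DGS update follows a consensus target rather than conflicting per-view edits, which is the mechanism the framework uses to avoid mode collapse.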

Illustration of 3DGS Editing

Face Editing

With its consistency-ensuring design, VcEdit achieves high-quality results on delicate face editing.

Object Editing

By ensuring multi-view consistency, VcEdit performs well on challenging panoramic object editing without the "Janus problem".

Scene Editing

VcEdit also remains effective at restyling an entire scene without specifying editing regions.

BibTeX

@article{wang2024view,
  title={View-Consistent 3D Editing with Gaussian Splatting},
  author={Wang, Yuxuan and Yi, Xuanyu and Wu, Zike and Zhao, Na and Chen, Long and Zhang, Hanwang},
  journal={arXiv preprint arXiv:2403.11868},
  year={2024}
}