Towards Context-Stable and Visual-Consistent Image Inpainting

(2312.04831)
Published Dec 8, 2023 in cs.CV

Abstract

Recent progress in inpainting increasingly relies on generative models, leveraging their strong generation capabilities to address large irregular masks. However, this enhanced generation often introduces context-instability, leading to arbitrary object generation within masked regions. This paper proposes a balanced solution, emphasizing the importance of unmasked regions in guiding inpainting while preserving generation capacity. Our approach, Aligned Stable Inpainting with UnKnown Areas Prior (ASUKA), employs a Masked Auto-Encoder (MAE) to produce a reconstruction-based prior. By aligning this prior with the powerful Stable Diffusion inpainting model (SD), ASUKA significantly improves context stability. ASUKA further adopts an inpainting-specialized decoder, greatly reducing SD's color-inconsistency issue and thus ensuring more visually consistent inpainting. We validate the effectiveness of inpainting algorithms on the benchmark dataset Places 2 and on MISATO, a collection of several existing datasets spanning diverse domains and masking scenarios. Results on these benchmarks confirm ASUKA's efficacy in both context-stability and visual-consistency compared to SD and other inpainting algorithms.
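
The abstract describes a three-stage flow: an MAE builds a prior from unmasked context, that prior steers SD's generation, and a specialized decoder replaces SD's stock decoder to fuse the result with known pixels. The PyTorch sketch below captures only that wiring; every module name and interface here is a hypothetical placeholder assumed for illustration, not the paper's actual architecture or alignment mechanism.

```python
import torch
import torch.nn as nn

class ASUKAPipeline(nn.Module):
    """Data-flow sketch of the ASUKA stages named in the abstract.

    The three submodules are stand-ins: the paper's real MAE, its
    prior-to-SD alignment, and its inpainting-specialized decoder
    are not reproduced here, only the order in which they interact.
    """

    def __init__(self, mae_prior: nn.Module, sd_inpainter: nn.Module,
                 inpaint_decoder: nn.Module):
        super().__init__()
        self.mae_prior = mae_prior            # reconstruction-based prior from unmasked regions
        self.sd_inpainter = sd_inpainter      # Stable Diffusion inpainting model (SD)
        self.inpaint_decoder = inpaint_decoder  # decoder specialized for inpainting

    def forward(self, image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # 1) MAE reconstructs the hole coarsely from unmasked context only,
        #    giving a "stable" prior that discourages arbitrary objects.
        prior = self.mae_prior(image * (1 - mask), mask)
        # 2) The prior guides SD's generation so the filled region stays
        #    aligned with the surrounding context (mechanism abstracted away).
        latents = self.sd_inpainter(image, mask, prior)
        # 3) The specialized decoder fuses the result with the known pixels,
        #    reducing the color inconsistency of SD's stock decoder.
        return self.inpaint_decoder(latents, image, mask)

if __name__ == "__main__":
    # Dummy stand-ins so the flow can be exercised end to end.
    class _Identity(nn.Module):
        def forward(self, *xs):
            return xs[0]

    pipe = ASUKAPipeline(_Identity(), _Identity(), _Identity())
    img = torch.rand(1, 3, 512, 512)
    msk = torch.zeros(1, 1, 512, 512)
    msk[..., 128:384, 128:384] = 1.0  # square hole to fill
    print(pipe(img, msk).shape)       # torch.Size([1, 3, 512, 512])
```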
