
Abstract

Reconstructing high-quality 3D objects from sparse, partial, single-view observations is crucially important for various applications in computer vision, robotics, and graphics. While recent neural implicit modeling methods show promising results on synthetic or dense data, they perform poorly on sparse and noisy real-world data. We find that the limitations of a popular neural implicit model stem from a lack of robust shape priors and a lack of proper regularization. In this work, we demonstrate high-quality in-the-wild shape reconstruction using: (i) a deep encoder as a robust initializer of the shape latent code; (ii) regularized test-time optimization of the latent code; (iii) a deep discriminator as a learned high-dimensional shape prior; (iv) a novel curriculum learning strategy that allows the model to learn shape priors on synthetic data and smoothly transfer them to sparse real-world data. Our approach better captures the global structure, performs well on occluded and sparse observations, and registers well with the ground-truth shape. We demonstrate superior performance over state-of-the-art 3D object reconstruction methods on two real-world datasets.
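
The abstract describes an inference pipeline in which an encoder initializes a shape latent code that is then refined by regularized test-time optimization, with a discriminator acting as a learned prior. Below is a minimal sketch of that idea under stated assumptions: the module names (ShapeEncoder-style `encoder`, implicit `decoder`, `discriminator`), the loss weights, and the iteration count are illustrative placeholders, not the paper's actual implementation.

```python
# Sketch only: assumes PyTorch-style modules `encoder`, `decoder`, `discriminator`
# are already trained; names, weights, and step counts are hypothetical.
import torch
import torch.nn.functional as F


def reconstruct_latent(encoder, decoder, discriminator, points_obs, sdf_obs,
                       steps=200, lr=1e-3, w_reg=1e-2, w_adv=1e-2):
    """Encoder-initialized, regularized test-time optimization of a shape latent code."""
    # (i) A deep encoder provides a robust initialization of the latent code.
    z = encoder(points_obs).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        # Data term: the implicit decoder should explain the sparse observations.
        pred = decoder(points_obs, z)
        data_loss = F.l1_loss(pred, sdf_obs)
        # (ii) Regularization keeps the optimized code close to the learned latent space.
        reg_loss = z.pow(2).sum()
        # (iii) The discriminator acts as a learned high-dimensional shape prior,
        # encouraging codes that look like plausible training shapes.
        adv_loss = -torch.log(torch.sigmoid(discriminator(z)) + 1e-8).mean()
        (data_loss + w_reg * reg_loss + w_adv * adv_loss).backward()
        opt.step()

    return z  # Decode z on a dense grid to extract the final surface.
```

In this reading, the encoder replaces random latent initialization (as in auto-decoder-style optimization), while the quadratic penalty and discriminator term supply the regularization and shape prior the abstract identifies as missing from the baseline.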
