Garment3DGen: 3D Garment Stylization and Texture Generation

(arXiv:2403.18816)
Published Mar 27, 2024 in cs.CV

Abstract

We introduce Garment3DGen, a new method to synthesize 3D garment assets from a base mesh given a single input image as guidance. Our proposed approach allows users to generate 3D textured clothes based on both real and synthetic images, such as those generated by text prompts. The generated assets can be directly draped and simulated on human bodies. First, we leverage recent progress in image-to-3D diffusion methods to generate 3D garment geometries. However, since these geometries cannot be utilized directly for downstream tasks, we propose to use them as pseudo ground-truth and set up a mesh deformation optimization procedure that deforms a base template mesh to match the generated 3D target. Second, we introduce carefully designed losses that allow the input base mesh to deform freely towards the desired target while preserving mesh quality and topology so that the result can be simulated. Finally, a texture estimation module generates high-fidelity texture maps that are globally and locally consistent and faithfully capture the input guidance, allowing us to render the generated 3D assets. With Garment3DGen, users can generate the textured 3D garment of their choice without the need for artist intervention: one can provide a textual prompt describing the desired garment and obtain a simulation-ready 3D asset. We present a plethora of quantitative and qualitative comparisons on various assets, both real and generated, and provide use cases showing how one can generate simulation-ready 3D garments.

Garment3DGen generates textured 3D garments from images, sketches, or text, enabling VR interactions.

Overview

  • Garment3DGen provides a fully automated method to transform images or textual prompts into simulation-ready 3D garment assets, streamlining the creation process.

  • The system leverages image-to-3D diffusion methods and mesh deformation optimization to maintain high-quality topology suitable for physics-based simulations.

  • It introduces novel techniques in mesh deformation and texture generation, enabling detailed and realistic rendering of both real and imagined garments.

  • Experiments demonstrate Garment3DGen's superior fidelity and mesh quality, suggesting its potential to revolutionize 3D content creation in various virtual applications.

Garment3DGen: Automating the Generation of Simulation-Ready 3D Garments

Introduction

The advancement of 3D asset creation plays a pivotal role in diverse industries ranging from gaming and movies to fashion and virtual reality. The creation of simulation-ready garments, in particular, has posed significant challenges due to the intricate requirements of manual design, draping, and topology optimization. Garment3DGen introduces a fully automated method that transforms base garment meshes into simulation-ready 3D assets directly from images or textual prompts, significantly streamlining an asset generation process that traditionally demands specialized software and expertise. This method not only democratizes content creation but also paves the way for rapid asset generation, broadening the application spectrum to include physics-based simulation and hand-cloth interaction in virtual reality environments.

Methodology

Generating Pseudo Ground-truth for Mesh Deformation

Garment3DGen leverages recent advancements in image-to-3D diffusion methods to produce 3D garment geometries from single input images or from synthetic images generated via text prompts. These geometries are too coarse to be used directly for downstream tasks, so they serve instead as pseudo ground-truth: a mesh deformation optimization process deforms a base template mesh to match the generated targets while maintaining mesh quality and topology for simulation purposes. This is further augmented by a texture estimation module, which generates globally and locally consistent high-fidelity texture maps from the input guidance, enabling realistic rendering of the generated 3D assets.
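In concrete terms, such an optimization typically minimizes a fitting term (e.g., Chamfer distance between points sampled from the deformed template and from the pseudo ground-truth) plus regularizers that keep the template simulation-friendly. The sketch below shows this general recipe using PyTorch3D; the specific loss terms, weights, and iteration counts are illustrative assumptions, not the paper's exact formulation.

```python
import torch
from pytorch3d.structures import Meshes
from pytorch3d.ops import sample_points_from_meshes
from pytorch3d.loss import (chamfer_distance, mesh_edge_loss,
                            mesh_laplacian_smoothing, mesh_normal_consistency)

def deform_to_target(src_verts, src_faces, target_mesh, iters=2000, lr=1e-3):
    # Optimize per-vertex offsets of the base template rather than raw
    # vertices, so the original topology (faces) stays fixed throughout.
    offsets = torch.zeros_like(src_verts, requires_grad=True)
    optim = torch.optim.Adam([offsets], lr=lr)
    target_pts = sample_points_from_meshes(target_mesh, 5000)
    for _ in range(iters):
        deformed = Meshes(verts=[src_verts + offsets], faces=[src_faces])
        pts = sample_points_from_meshes(deformed, 5000)
        # Fitting term: pull the template toward the pseudo ground-truth.
        loss_cd, _ = chamfer_distance(pts, target_pts)
        # Regularizers (illustrative weights) that preserve mesh quality
        # so the result remains usable in cloth simulation.
        loss = (loss_cd
                + 0.1 * mesh_laplacian_smoothing(deformed, method="uniform")
                + 1.0 * mesh_edge_loss(deformed)
                + 0.01 * mesh_normal_consistency(deformed))
        optim.zero_grad()
        loss.backward()
        optim.step()
    return src_verts + offsets.detach()
```

Regularizing offsets instead of absolute positions is a common design choice here: it lets the template deform freely toward the target while edge-length and Laplacian terms penalize degenerate triangles and surface noise that would break downstream simulation.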

Novel Approaches in Mesh Deformation and Texture Generation

Key contributions of Garment3DGen include direct 3D-space geometry supervision using pseudo ground-truth, carefully designed losses that ensure mesh deformations preserve simulation quality, and a texture enhancement module for generating detailed UV textures. Notably, the system can generate textures from a single image, enabling the creation of both real and fantastical garments from simple textual descriptions or images. Additionally, a novel body-cloth optimization framework fits the generated garments onto parametric body models for accurate simulation in various scenarios; a sketch of one common ingredient of such fitting follows.
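This summary does not spell out the body-cloth optimization, but a standard ingredient in garment-fitting pipelines is a penetration penalty that pushes garment vertices outside the body surface. The plain-PyTorch sketch below is a hypothetical, simplified version; `penetration_loss` and the margin `eps` are illustrative names, not the paper's API.

```python
import torch

def penetration_loss(garment_verts, body_points, body_normals, eps=3e-3):
    # garment_verts: (G, 3); body_points/body_normals: (B, 3), with
    # body_normals assumed to be outward-facing unit normals.
    d = torch.cdist(garment_verts, body_points)   # (G, B) pairwise distances
    idx = d.argmin(dim=1)                         # nearest body point per vertex
    nearest = body_points[idx]                    # (G, 3)
    normals = body_normals[idx]                   # (G, 3)
    # Signed offset along the body normal: negative means inside the body.
    signed = ((garment_verts - nearest) * normals).sum(dim=1)
    # Penalize vertices inside (or within eps of) the body surface.
    return torch.relu(eps - signed).mean()
```

In practice such a term would be combined with the geometry-fitting losses above and minimized jointly over garment vertex offsets (and possibly body pose parameters), so the garment conforms to the body without intersecting it.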

Experimental Outcomes

The experiments showcase the method's ability to generate high-quality, textured 3D garments from images or text that adhere closely to the input while retaining the properties required for simulation. Quantitative comparisons demonstrate superior performance over existing approaches in fidelity to the input image and in mesh quality for downstream applications. Qualitative results highlight the method's versatility across garment types, preserving fine details and realistic textures and underscoring its potential to revolutionize content creation in virtual environments.

Theoretical and Practical Implications

Garment3DGen makes significant theoretical contributions by integrating image-to-3D diffusion methods with mesh deformation techniques, with practical implications that extend to animating avatars, physics-based cloth simulation, and virtual reality applications. This framework sets a foundation for future developments in AI-driven content creation, particularly the automated generation of simulation-ready 3D garments and assets.

Future Directions

While Garment3DGen marks a significant stride in 3D garment generation, future work may focus on expanding the diversity of base meshes, enhancing texture detail preservation, and improving the efficiency of the generation process. The method's current limitations, such as its dependence on the base mesh resembling the target and the loss of texture detail in certain scenarios, present opportunities for advancing this research frontier and promise further innovations in AI-driven 3D content creation.

Conclusion

Garment3DGen represents a notable advancement in the automation of simulation-ready 3D garment generation from mere images or text prompts. It offers a comprehensive solution encompassing mesh deformation, texture generation, and garment-body fitting, well suited for downstream simulation and VR applications. This research not only addresses existing gaps in 3D garment creation but also heralds a new era of rapid, high-quality content creation accessible to both novices and experts, fostering innovation across various digital realms.
