
Towards Garment Sewing Pattern Reconstruction from a Single Image (2311.04218v1)

Published 7 Nov 2023 in cs.CV, cs.AI, cs.GR, cs.LG, and cs.MM

Abstract: Garment sewing pattern represents the intrinsic rest shape of a garment, and is the core for many applications like fashion design, virtual try-on, and digital avatars. In this work, we explore the challenging problem of recovering garment sewing patterns from daily photos for augmenting these applications. To solve the problem, we first synthesize a versatile dataset, named SewFactory, which consists of around 1M images and ground-truth sewing patterns for model training and quantitative evaluation. SewFactory covers a wide range of human poses, body shapes, and sewing patterns, and possesses realistic appearances thanks to the proposed human texture synthesis network. Then, we propose a two-level Transformer network called Sewformer, which significantly improves the sewing pattern prediction performance. Extensive experiments demonstrate that the proposed framework is effective in recovering sewing patterns and well generalizes to casually-taken human photos. Code, dataset, and pre-trained models are available at: https://sewformer.github.io.

Citations (14)

Summary

  • The paper introduces the SewFactory dataset and Transformer-based Sewformer framework for reconstructing garment sewing patterns from single images.
  • It leverages a two-level Transformer decoding mechanism to process panel and edge representations, achieving significant improvements over prior methods.
  • The approach enables realistic texture synthesis and accurate pattern editing, offering practical applications in virtual try-on and fashion design.

Towards Garment Sewing Pattern Reconstruction from a Single Image: An Analysis

The paper "Towards Garment Sewing Pattern Reconstruction from a Single Image" presents a compelling framework for inferring garment sewing patterns from everyday photographs. The primary motivation behind this research is the potential implications across various industries, such as fashion design, virtual try-on, and digital avatars, which would benefit from an accessible way to extract intricate garment details from a single image.

A key component of the work is the introduction of a comprehensive dataset named SewFactory, which offers a substantial number of image and ground-truth sewing pattern pairs—approximately one million in total. This dataset spans a wide spectrum of garment types, human poses, and textures, ensuring robust model training and evaluation. The high degree of variability and realism in SewFactory is an important feature, addressing a notable gap left by existing datasets that often lack diversity in garment types or adequate annotations.

The method rests on two central contributions: the SewFactory dataset and a novel Transformer-based framework named Sewformer. Sewformer operates through a two-level Transformer decoding mechanism aligned with the hierarchical nature of sewing pattern data: a garment is a set of panels, and each panel is a closed loop of edges. The architecture therefore processes panel-level and edge-level representations separately, a design choice validated by the empirical evaluations.
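The hierarchical decoding idea can be illustrated with a minimal sketch. This is not the authors' implementation; the query counts, feature dimension, and the choice to let edge queries attend to their panel token plus the image features are illustrative assumptions, using plain scaled dot-product attention in place of full Transformer decoder layers:

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention with row-wise softmax."""
    scores = q @ k.T / np.sqrt(k.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
d = 16
img_feats = rng.normal(size=(49, d))     # e.g. a 7x7 image feature grid
panel_queries = rng.normal(size=(8, d))  # one learned query per panel slot
edge_queries = rng.normal(size=(14, d))  # learned queries for edge slots

# Level 1: panel decoder -- each panel query attends to image features.
panel_tokens = attention(panel_queries, img_feats, img_feats)

# Level 2: edge decoder -- edge queries for panel i attend to that
# panel's token together with the image features (an assumption here).
edge_tokens = np.stack([
    attention(edge_queries,
              np.concatenate([panel_tokens[i:i + 1], img_feats]),
              np.concatenate([panel_tokens[i:i + 1], img_feats]))
    for i in range(len(panel_tokens))
])

print(panel_tokens.shape)  # (8, 16): one token per panel slot
print(edge_tokens.shape)   # (8, 14, 16): per-panel edge tokens
```

Decoding panels before edges lets the model first commit to a coarse garment decomposition and then refine per-panel geometry, mirroring how a sewing pattern is actually structured.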

In terms of results, the paper reports significant improvements over existing approaches such as NeuralTailor, a state-of-the-art method for sewing pattern reconstruction from 3D data. Sewformer reduces errors in the predicted shapes, rotations, and translations of garment panels while also achieving high precision, recall, and F1 scores for stitching relations. These results indicate that Sewformer copes well with the irregular, variable-length structure of sewing pattern data.
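Since stitching relations pair up panel edges, the precision/recall/F1 evaluation amounts to set comparison between predicted and ground-truth edge pairs. A minimal sketch (the `panel:edge` identifier format is a hypothetical convention, not the paper's):

```python
def stitch_metrics(pred, gt):
    """Precision, recall, and F1 over sets of stitched edge pairs.

    pred, gt: sets of frozensets, each pairing two panel-edge ids.
    """
    tp = len(pred & gt)  # correctly predicted stitches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gt) if gt else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

gt = {frozenset(p) for p in [("front:0", "back:0"),
                             ("front:1", "back:1"),
                             ("sleeve:2", "front:2")]}
pred = {frozenset(p) for p in [("front:0", "back:0"),
                               ("front:1", "back:1"),
                               ("sleeve:2", "back:3")]}  # one wrong stitch

print(stitch_metrics(pred, gt))  # (0.666..., 0.666..., 0.666...)
```

Using unordered pairs (`frozenset`) matters because a stitch between edge A and edge B is the same stitch as one between B and A.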

Another noteworthy innovation is a data-driven texture synthesis network for producing photorealistic human body appearances. This network mitigates common issues in existing synthetic datasets, such as artifact-laden and unrealistic textures, thereby smoothing the transition from research datasets to potential real-world applications. The qualitative and quantitative analyses suggest that the method effectively narrows domain gaps, bolstering generalization to real-world photos.

The paper further highlights the real-world applicability of the proposed methods. With the reconstructed sewing patterns, users can perform accurate garment reproduction and editing for virtual try-ons and modifications, such as altering garment textures or human poses. This flexibility is of significant use in practical settings, suggesting that this approach is poised to influence procedural garment design and related applications substantially.

Moving forward, key areas merit further exploration. Although the paper's contributions bridge several existing gaps in garment pattern reconstruction, future studies could enrich the dataset with broader garment styles and more complex interactions between clothing and the body. Moreover, the methodology could be extended to handle unseen garment accessories or non-standard garment styles.

In conclusion, the paper makes a substantial advance in garment sewing pattern reconstruction, and each of its elements, from the SewFactory dataset and the Sewformer architecture to the human texture synthesis network, opens avenues for further work. Future enhancements could streamline fashion design, retail, and animated avatars, among other applications.
