SimulCap: Single-View Human Performance Capture with Cloth Simulation (1903.06323v2)

Published 15 Mar 2019 in cs.CV

Abstract: This paper proposes a new method for live free-viewpoint human performance capture with dynamic details (e.g., cloth wrinkles) using a single RGBD camera. Our main contributions are: (i) a multi-layer representation of garments and body, and (ii) a physics-based performance capture procedure. We first digitize the performer using multi-layer surface representation, which includes the undressed body surface and separate clothing meshes. For performance capture, we perform skeleton tracking, cloth simulation, and iterative depth fitting sequentially for the incoming frame. By incorporating cloth simulation into the performance capture pipeline, we can simulate plausible cloth dynamics and cloth-body interactions even in the occluded regions, which was not possible in previous capture methods. Moreover, by formulating depth fitting as a physical process, our system produces cloth tracking results consistent with the depth observation while still maintaining physical constraints. Results and evaluations show the effectiveness of our method. Our method also enables new types of applications such as cloth retargeting, free-viewpoint video rendering and animations.

Citations (111)

Summary

  • The paper introduces a novel multi-layer representation that separates the body and garments for more accurate performance capture.
  • It employs an integrated approach combining skeleton tracking, iterative depth fitting, and physics-based cloth simulation.
  • Empirical results demonstrate robust reconstruction in both visible and occluded regions, enabling applications in augmented and virtual reality.

SimulCap: Single-View Human Performance Capture with Cloth Simulation

The paper "SimulCap: Single-View Human Performance Capture with Cloth Simulation" presents a novel method for capturing human performance with dynamic cloth details using a single RGBD camera. The authors, Tao Yu et al., propose a multi-layer representation approach combined with a physics-based capture procedure. This approach separates the body and clothing into distinct, multi-layer surfaces, a crucial advance over existing technologies which typically conflate these elements into a single geometric entity. This paper emphasizes two core contributions: a multi-layer representation of garments and body, and a physics-based procedure to capture performance.

The method first digitizes the performer using a multi-layer surface representation with separate meshes for the body and clothing. For each incoming frame, the system then runs skeleton tracking, cloth simulation, and iterative depth fitting in sequence. Incorporating cloth simulation into the capture loop yields plausible cloth dynamics and cloth-body interactions even in occluded regions, which previous capture methods could not reconstruct. The paper further formulates depth fitting as a physical process, ensuring that cloth tracking adheres to both the observed depth data and physical constraints, as sketched below.
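
The following Python sketch illustrates one plausible reading of depth fitting as a physical process: visible cloth vertices are pulled toward their corresponding depth points by spring-like forces, and the same kind of integrator that steps the cloth keeps the result physically consistent. This is not the authors' implementation; the gains, integration scheme, and function signature are illustrative assumptions.

```python
import numpy as np

def fit_cloth_to_depth(cloth_pos, cloth_vel, depth_targets, visible,
                       k_fit=50.0, damping=0.9, dt=1.0 / 30.0, iters=5):
    """Iterative depth fitting cast as a physical process (illustrative sketch).

    cloth_pos, cloth_vel : (V, 3) vertex positions / velocities after this
                           frame's cloth-simulation step
    depth_targets        : (V, 3) corresponding depth points per vertex
    visible              : (V,) boolean mask; occluded vertices receive no
                           fitting force and keep their simulated motion
    """
    gravity = np.array([0.0, -9.81, 0.0])
    for _ in range(iters):
        forces = np.zeros_like(cloth_pos)
        # Spring-like attraction toward the observed depth for visible vertices.
        forces[visible] = k_fit * (depth_targets[visible] - cloth_pos[visible])
        forces += gravity  # physics still applies everywhere (unit vertex mass)
        # Simple damped explicit integration; a stand-in for the real solver,
        # which would also enforce stretch limits and cloth-body collisions.
        cloth_vel = damping * (cloth_vel + dt * forces)
        cloth_pos = cloth_pos + dt * cloth_vel
    return cloth_pos, cloth_vel
```

In the full per-frame loop, this step would run after skeleton tracking has posed the body layer and after the garments have been stepped forward by the simulator, so occluded vertices inherit plausible simulated motion while visible ones converge to the depth observation.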

The reported results demonstrate the method's effectiveness in capturing visible regions realistically while also reconstructing plausible cloth dynamics in occluded regions. These capabilities point to applications such as augmented reality, virtual dressing, free-viewpoint video rendering, and animation.

The paper notes several limitations, including difficulty tracking very thick clothing and capturing realistic cloth-body interactions under intricate body motions. It suggests that future work might incorporate more advanced cloth simulation techniques and develop models that better handle topology changes, soft tissue, faces, and hands.

The practical implications are notable wherever real-time, realistic digital avatars matter. Because garments and body are represented separately, garments can be edited, retargeted, and personalized independently of the wearer, enabling applications such as cloth retargeting in virtual environments. The work could also benefit domains like telepresence and virtual reality, where natural interaction between virtual clothing and human bodies is critical.

Looking ahead, incorporating learned dynamics and material properties could make such simulations both more realistic and more computationally efficient, enabling richer interactive experiences in virtual systems.
