Extreme Parkour with Legged Robots

Published 25 Sep 2023 in cs.RO, cs.AI, cs.CV, cs.LG, cs.SY, and eess.SY (arXiv:2309.14341v1)

Abstract: Humans can perform parkour by traversing obstacles in a highly dynamic fashion requiring precise eye-muscle coordination and movement. Getting robots to do the same task requires overcoming similar challenges. Classically, this is done by independently engineering perception, actuation, and control systems to very low tolerances. This restricts them to tightly controlled settings such as a predetermined obstacle course in labs. In contrast, humans are able to learn parkour through practice without significantly changing their underlying biology. In this paper, we take a similar approach to developing robot parkour on a small low-cost robot with imprecise actuation and a single front-facing depth camera for perception which is low-frequency, jittery, and prone to artifacts. We show how a single neural net policy operating directly from a camera image, trained in simulation with large-scale RL, can overcome imprecise sensing and actuation to output highly precise control behavior end-to-end. We show our robot can perform a high jump on obstacles 2x its height, long jump across gaps 2x its length, do a handstand and run across tilted ramps, and generalize to novel obstacle courses with different physical properties. Parkour videos at https://extreme-parkour.github.io/

Citations (110)

Summary

  • The paper presents a dual distillation method that lets a quadrupedal robot select its own heading direction while traversing diverse parkour obstacles.
  • It demonstrates extreme capabilities: high jumps onto obstacles twice the robot's height and long jumps across gaps twice its length, all with a single neural network policy.
  • The approach uses large-scale reinforcement learning in simulation to compensate for imprecise sensing and actuation, yielding robust dynamic navigation in unstructured environments.

Extreme Parkour with Legged Robots: A Technical Overview

The paper, "Extreme Parkour with Legged Robots," authored by Xuxin Cheng, Kexin Shi, Ananye Agarwal, and Deepak Pathak from Carnegie Mellon University, presents an innovative approach to robotic parkour using a low-cost quadrupedal robot equipped with a single front-facing depth camera. This research aims to bridge the gap between highly dynamic human parkour movements and robotic capabilities by employing a learning-based method that forgoes traditional, heavily controlled environments.

Key Contributions

The primary contributions of this work are as follows:

  • A dual distillation method that allows the robot to select its heading direction autonomously while navigating diverse parkour obstacles.
  • A simple, unified reward function based on inner products that supports acquiring a range of parkour skills without per-skill reward engineering.
  • Demonstrations of extreme parkour abilities, such as high jumps onto obstacles twice the robot's height and long jumps across gaps twice its length, all executed by a single neural network policy.

Methodological Innovations

  1. Unified Reward Design: The authors design rewards around inner products between the robot's velocity and the direction toward the next waypoint, enabling the robot to learn parkour maneuvers without task-specific engineering of control strategies and to handle diverse, unexpected obstacles (a minimal sketch of such a reward term follows this list).
  2. Dual Distillation: Learning proceeds in two phases. In Phase 1, a teacher policy learns parkour behaviors in simulation with access to privileged information such as exact terrain geometry. In Phase 2, this knowledge is distilled into a student policy that operates from onboard depth images alone and additionally predicts its own heading direction, enabling real-world deployment (see the second sketch below).
  3. Handling Imprecise Sensing and Actuation: By training with large-scale reinforcement learning in simulation, the system compensates for the inherent inaccuracies in the sensing and actuation of a low-cost robot, which is crucial for executing precise maneuvers in the real world.
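
The inner-product idea can be made concrete with a short sketch. This is an illustrative reconstruction, not the authors' code: the function name, argument conventions, and the saturation at the commanded speed are assumptions.

```python
import numpy as np

def tracking_reward(base_lin_vel, base_pos, next_waypoint, commanded_speed):
    """Reward the velocity component toward the next waypoint (sketch).

    A single inner product replaces per-skill reward terms: whatever the
    obstacle, the robot is rewarded for making progress along the unit
    vector pointing at the next waypoint, capped at the commanded speed
    so overshooting is not encouraged.
    """
    direction = next_waypoint - base_pos
    d_hat = direction / (np.linalg.norm(direction) + 1e-8)  # unit direction
    progress = float(np.dot(base_lin_vel, d_hat))           # inner product
    return min(progress, commanded_speed)
```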
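
The second phase of dual distillation can likewise be sketched as supervised regression from a frozen, privileged teacher to a depth-conditioned student. All module and argument names below are hypothetical, and the paper's actual training loop (run on-policy in simulation) is more involved.

```python
import torch
import torch.nn.functional as F

def distill_step(student, teacher, proprio, privileged_obs, depth, optimizer):
    """One distillation update (sketch, PyTorch).

    The teacher acts from privileged simulator state; the student sees only
    proprioception and depth images. Both the action and the predicted
    heading direction are regressed -- the "dual" part of the method.
    """
    with torch.no_grad():  # the teacher is frozen during distillation
        target_action, target_heading = teacher(proprio, privileged_obs)
    pred_action, pred_heading = student(proprio, depth)
    loss = (F.mse_loss(pred_action, target_action)
            + F.mse_loss(pred_heading, target_heading))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```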

Experimental Evaluation

The approach is evaluated on a variety of parkour setups, including high jumps, long jumps, and handstands. The results demonstrate the robot's capacity to generalize learned behaviors to novel obstacle courses with differing physical properties. Notably, the robot achieves high success rates on complex terrains such as tilted ramps and steps, surpassing comparable baselines.

Performance metrics such as Mean X-Displacement (average forward progress along the course) and Mean Edge Violation (how far footholds land inside an unsafe margin around obstacle edges) were used to validate the approach in both simulation and the real world. The results indicate superior performance compared to a classical elevation-mapping baseline and simpler reward strategies, supporting the efficacy of the proposed method; illustrative definitions of both metrics are sketched below.
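
For intuition, here is a rough sketch of what such metrics compute. The helper names, array conventions, and the 5 cm edge margin are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def mean_x_displacement(episode_x):
    """Average forward progress per episode (sketch).
    episode_x: list of 1-D arrays of the robot base x-position over time."""
    return float(np.mean([x[-1] - x[0] for x in episode_x]))

def mean_edge_violation(foothold_x, edge_x, margin=0.05):
    """Average depth by which footholds intrude within `margin` metres of an
    obstacle edge (sketch; the margin value is an assumption)."""
    intrusions = []
    for f in foothold_x:
        d = float(np.min(np.abs(np.asarray(edge_x) - f)))
        if d < margin:
            intrusions.append(margin - d)
    return float(np.mean(intrusions)) if intrusions else 0.0
```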

Implications and Future Directions

The findings have practical implications for deploying legged robots in unpredictable, dynamic environments, enhancing autonomy in tasks such as search-and-rescue or exploration of unstructured terrain. Theoretically, the study advances our understanding of how a neural network can map high-dimensional image inputs directly to control outputs in real time.

Future research could extend these methods to more complex robotic systems involving manipulation, or integrate additional sensory modalities to further improve robustness and reliability. Adapting the approach to different robot morphologies could also broaden its scope of application within robotics.

In summary, this paper presents a significant step towards achieving flexible and adaptive robotic parkour, opening avenues for further exploration in intelligent robotic systems.
