
A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones (1805.01831v4)

Published 4 May 2018 in cs.RO, cs.AI, cs.NE, and eess.SP

Abstract: Fully-autonomous miniaturized robots (e.g., drones) with AI-based visual navigation capabilities are extremely challenging drivers of Internet-of-Things edge intelligence capabilities. Visual navigation based on AI approaches, such as deep neural networks (DNNs), is becoming pervasive for standard-size drones, but is considered out of reach for nano-drones with a size of a few cm$^2$. In this work, we present the first (to the best of our knowledge) demonstration of a navigation engine for autonomous nano-drones capable of closed-loop end-to-end DNN-based visual navigation. To achieve this goal we developed a complete methodology for parallel execution of complex DNNs directly on board resource-constrained milliwatt-scale nodes. Our system is based on GAP8, a novel parallel ultra-low-power computing platform, and a 27 g commercial, open-source CrazyFlie 2.0 nano-quadrotor. As part of our general methodology we discuss the software mapping techniques that enable the state-of-the-art deep convolutional neural network presented in [1] to be fully executed on-board within a strict 6 fps real-time constraint with no compromise in terms of flight results, while all processing is done with only 64 mW on average. Our navigation engine is flexible and can be used to span a wide performance range: at its peak performance corner it achieves 18 fps while still consuming on average just 3.5% of the power envelope of the deployed nano-aircraft.

Citations (153)

Summary

  • The paper presents a fully integrated DNN-based visual navigation engine that enables closed-loop, real-time autonomous flight on nano-drones at an average processing power of only 64 mW.
  • It employs the GAP8 SoC and tailored software mapping techniques to execute CNN inference efficiently, reaching up to 18 frames per second at peak performance.
  • Compared with a state-of-the-art MCU (STM32H7), the GAP8-based approach delivers approximately 2.5 times higher performance at over 21% lower power consumption, broadening IoT and autonomous-exploration applications.
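The headline figures imply a simple per-frame budget. As a back-of-envelope check (the fps and power numbers come from the paper; the arithmetic is ours):

```python
# Per-frame time and energy budget implied by the paper's figures:
# a 6 fps real-time constraint, 64 mW average processing power,
# and an 18 fps peak-performance configuration.

REALTIME_FPS = 6     # required closed-loop frame rate
PEAK_FPS = 18        # peak-performance configuration
AVG_POWER_MW = 64    # average processing power at 6 fps

frame_time_ms = 1000 / REALTIME_FPS            # time available per frame
energy_per_frame_mj = AVG_POWER_MW / REALTIME_FPS  # mW x s = mJ per frame

print(f"per-frame time budget: {frame_time_ms:.1f} ms")    # ~166.7 ms
print(f"energy per frame:      {energy_per_frame_mj:.1f} mJ")  # ~10.7 mJ
print(f"peak headroom over real time: {PEAK_FPS / REALTIME_FPS:.0f}x")
```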

A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones

The paper "A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones" presents a significant advancement in autonomous unmanned aerial vehicles (UAVs) by demonstrating a complete deployment of a deep neural network (DNN) for visual navigation on a nano-drone platform. It addresses a key challenge in robotics and AI: implementing sophisticated AI algorithms, specifically convolutional neural networks (CNNs), on resource-constrained nano-UAVs that operate within a limited power envelope.

The authors developed a navigation engine capable of performing closed-loop, end-to-end visual navigation on a nano-UAV, utilizing the GAP8 SoC, a parallel ultra-low-power computing platform. This system is implemented on a CrazyFlie 2.0 nano-quadrotor weighing merely 27 grams. The novelty of this work lies in the successful deployment of complex DNNs on a resource-constrained device, maintaining robust performance without compromising the UAV's flight dynamics.

The authors present a comprehensive methodology for parallel execution of DNNs on board the UAV. The paper details software mapping and parallelization techniques that enable efficient execution of the CNN-based navigation algorithm. Crucially, the system meets the strict 6 fps real-time constraint at an average processing power of only 64 mW, and can scale up to 18 frames per second in its peak-performance configuration.
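A core part of such software mapping is fitting each layer's working set into GAP8's small on-cluster L1 memory, tile by tile. The sketch below is not the authors' code: the L1 size (~64 KB), the 16-bit fixed-point data type, and the row-wise tiling strategy are our assumptions for illustration of the kind of sizing computation involved.

```python
# Illustrative tile-sizing calculation for running a conv layer out of a
# small on-cluster L1 memory (sizes and data types are assumptions, not
# the paper's actual configuration).

L1_BYTES = 64 * 1024    # assumed shared L1 size
BYTES_PER_ELEM = 2      # assumed 16-bit fixed-point activations/weights

def rows_per_tile(width, in_ch, out_ch, ksize):
    """Largest tile height (in output rows) whose input slab, output slab
    and full weight set fit in L1 at once. Double-buffered halves for DMA
    overlap are omitted for simplicity."""
    weight_bytes = ksize * ksize * in_ch * out_ch * BYTES_PER_ELEM
    budget = L1_BYTES - weight_bytes
    rows = 0
    while True:
        r = rows + 1  # candidate tile height in output rows
        # a stride-1 'same'-style conv needs (r + ksize - 1) input rows
        in_bytes = (r + ksize - 1) * width * in_ch * BYTES_PER_ELEM
        out_bytes = r * width * out_ch * BYTES_PER_ELEM
        if in_bytes + out_bytes > budget:
            return rows
        rows = r

# e.g. a 3x3 conv, 16 -> 32 channels, on a 50-pixel-wide feature map
print(rows_per_tile(width=50, in_ch=16, out_ch=32, ksize=3))  # -> 11
```

Each tile would then be DMA-transferred from L2 to L1, processed in parallel by the cluster cores, and written back, which is the general pattern the paper's mapping techniques automate.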

In terms of numerical results, the solution achieves a peak throughput of 702 MMAC/s at a power consumption of 272 mW, far exceeding the capabilities of traditional MCUs such as the STM32H7. The paper highlights that the GAP8-based solution is markedly more energy-efficient, delivering approximately 2.5 times the performance at over 21% lower power consumption than such conventional systems.
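The reported peak numbers translate directly into an energy-efficiency figure (the throughput and power values are the paper's; the derivation is ours):

```python
# Energy efficiency implied by the reported peak operating point:
# 702 MMAC/s at 272 mW.
mmac_per_s = 702
power_mw = 272

eff = mmac_per_s / power_mw   # MMAC/s per mW == GMAC/s per W
print(f"{eff:.2f} GMAC/s/W")  # ~2.58 GMAC/s/W
```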

The implications of this research are noteworthy for both theoretical and practical applications. Theoretically, it showcases the potential of parallel ultra-low-power computing for deploying complex AI workloads on embedded devices. Practically, this work enables the integration of advanced perception and navigation systems into miniaturized UAVs, extending the operational domain of these platforms to IoT applications, indoor surveillance, and autonomous exploration in urban environments.

Looking forward, the methodologies demonstrated in this work could be extended to facilitate more complex navigation and sensing tasks on nano-drones, potentially incorporating features such as dynamic obstacle avoidance and cooperative multi-drone operations. The open-source release of the PULP-Shield design and the accompanying software artifacts will further stimulate research and development in the domain of autonomous nano-UAVs and intelligent IoT nodes.

This research aligns with the evolving landscape of AI in edge computing, emphasizing the deployment of intelligent systems in real-world applications where power efficiency and computational capability are critical. The paper sets a precedent for future work on deploying DNNs on resource-constrained platforms, reinforcing the feasibility of AI-driven autonomy in small-scale aerial vehicles.
