
Adaptive Random Forests for Energy-Efficient Inference on Microcontrollers (2205.13838v1)

Published 27 May 2022 in cs.LG

Abstract: Random Forests (RFs) are widely used Machine Learning models on low-power embedded devices, due to their hardware-friendly operation and high accuracy on practically relevant tasks. The accuracy of a RF often increases with the number of internal weak learners (decision trees), but at the cost of a proportional increase in inference latency and energy consumption. Such costs can be mitigated considering that, in most applications, inputs are not all equally difficult to classify. Therefore, a large RF is often necessary only for (few) hard inputs, and wasteful for easier ones. In this work, we propose an early-stopping mechanism for RFs, which terminates the inference as soon as a high-enough classification confidence is reached, reducing the number of weak learners executed for easy inputs. The early-stopping confidence threshold can be controlled at runtime, in order to favor either energy saving or accuracy. We apply our method to three different embedded classification tasks, on a single-core RISC-V microcontroller, achieving an energy reduction from 38% to more than 90% with a drop of less than 0.5% in accuracy. We also show that our approach outperforms previous adaptive ML methods for RFs.
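The early-stopping idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each weak learner is a callable returning a class-probability vector, and that confidence is measured as the (normalized) top aggregated score, with the function name `adaptive_rf_predict`, the `threshold` parameter, and the stub trees all being hypothetical choices for illustration.

```python
def adaptive_rf_predict(trees, x, n_classes, threshold=0.9):
    """Run the RF's weak learners one at a time, stopping early once
    the aggregated classification confidence exceeds `threshold`.

    Returns the predicted class and the number of trees executed,
    so energy savings (fewer trees run) can be inspected.
    """
    scores = [0.0] * n_classes
    executed = 0
    for tree in trees:
        probs = tree(x)  # each weak learner yields a class-probability vector
        for c in range(n_classes):
            scores[c] += probs[c]
        executed += 1
        # Normalize by the number of trees run so far to get a confidence
        # estimate; stop as soon as it is high enough (an "easy" input).
        confidence = max(scores) / executed
        if confidence >= threshold:
            break
    best = max(range(n_classes), key=lambda c: scores[c])
    return best, executed
```

Raising `threshold` at runtime trades energy for accuracy (more trees execute before stopping), which mirrors the runtime-controllable trade-off the paper describes.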

Authors (8)
  1. Francesco Daghero (15 papers)
  2. Alessio Burrello (52 papers)
  3. Chen Xie (27 papers)
  4. Luca Benini (363 papers)
  5. Andrea Calimera (11 papers)
  6. Enrico Macii (37 papers)
  7. Massimo Poncino (34 papers)
  8. Daniele Jahier Pagliari (46 papers)
Citations (8)
