A Framework for Energy and Carbon Footprint Analysis of Distributed and Federated Edge Learning (2103.10346v1)

Published 18 Mar 2021 in cs.LG and cs.DC

Abstract: Recent advances in distributed learning raise environmental concerns due to the large amounts of energy needed to train models and move data to/from data centers. Novel paradigms, such as federated learning (FL), are suitable for decentralized model training across devices or silos that simultaneously act as both data producers and learners. Unlike centralized learning (CL) techniques, which rely on big-data fusion and analytics in energy-hungry data centers, in FL scenarios devices collaboratively train their models without sharing their private data. This article breaks down and analyzes the main factors that influence the environmental footprint of FL policies compared with classical CL/Big-Data algorithms running in data centers. The proposed analytical framework accounts for both learning and communication energy costs, as well as carbon-equivalent emissions; in addition, it models both vanilla FL and decentralized FL policies driven by consensus. The framework is evaluated in an industrial setting assuming a real-world robotized workplace. Results show that FL allows remarkable end-to-end energy savings (30%-40%) for wireless systems characterized by low bit/Joule efficiency (50 kbit/Joule or lower). Consensus-driven FL does not require a parameter server and further reduces emissions in mesh networks (200 kbit/Joule). On the other hand, all FL policies are slower to converge when local data are unevenly distributed (often 2x slower than CL). Energy footprint and learning loss can be traded off to optimize efficiency.
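
The abstract's headline numbers come from weighing communication energy, governed by the link's bit/Joule efficiency, against computation energy on both sides. Below is a minimal back-of-the-envelope sketch of that trade-off, not the paper's actual analytical framework: every function name and numeric value is an illustrative assumption, chosen only so the toy comparison lands near the reported 30%-40% range at 50 kbit/Joule.

# Toy energy comparison: centralized learning (CL), where raw data is
# uploaded to a data center, vs. vanilla federated learning (FL), where
# devices exchange model parameters with a parameter server each round.
# All parameters and values are illustrative assumptions, not paper data.

def cl_energy_joules(dataset_bits, eff_bits_per_joule, train_energy_j):
    # CL: move the full dataset over the network once, then train centrally.
    upload_j = dataset_bits / eff_bits_per_joule
    return upload_j + train_energy_j

def fl_energy_joules(rounds, devices, model_bits, eff_bits_per_joule,
                     local_train_j_per_round):
    # Vanilla FL: per round, each device trains locally and exchanges the
    # model parameters (uplink + downlink) with the parameter server.
    comm_j_per_round = devices * 2 * model_bits / eff_bits_per_joule
    compute_j_per_round = devices * local_train_j_per_round
    return rounds * (comm_j_per_round + compute_j_per_round)

if __name__ == "__main__":
    eff = 50e3  # low-efficiency wireless link: 50 kbit/Joule
    cl = cl_energy_joules(dataset_bits=40e9, eff_bits_per_joule=eff,
                          train_energy_j=5e3)
    fl = fl_energy_joules(rounds=250, devices=10, model_bits=4e6,
                          eff_bits_per_joule=eff, local_train_j_per_round=50)
    print(f"CL: {cl / 1e3:.0f} kJ  FL: {fl / 1e3:.0f} kJ  "
          f"FL saving: {100 * (1 - fl / cl):.0f}%")

With these contrived inputs the sketch prints roughly a 35% FL saving, consistent with the abstract's claim that FL pays off when moving raw data is expensive relative to exchanging (much smaller) model updates. The paper's framework additionally models carbon-equivalent emissions and consensus-driven FL without a parameter server, neither of which this sketch attempts.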
