Federated Double Deep Q-learning for Joint Delay and Energy Minimization in IoT networks (2104.11320v1)

Published 2 Apr 2021 in cs.NI and cs.LG

Abstract: In this paper, we propose a federated deep reinforcement learning framework to solve a multi-objective optimization problem, where we consider minimizing the expected long-term task completion delay and energy consumption of IoT devices. This is done by optimizing offloading decisions, computation resource allocation, and transmit power allocation. Since the formulated problem is a mixed-integer non-linear program (MINLP), we first cast our problem as a multi-agent distributed deep reinforcement learning (DRL) problem and address it using a double deep Q-network (DDQN), where the actions are offloading decisions. The immediate cost of each agent is calculated by solving either the transmit power optimization or the local computation resource optimization, depending on the selected offloading decisions (actions). Then, to enhance the learning speed of the IoT devices (agents), we incorporate federated learning (FDL) at the end of each episode. FDL enhances the scalability of the proposed DRL framework, creates a context for cooperation between agents, and minimizes their privacy concerns. Our numerical results demonstrate the efficacy of the proposed federated DDQN framework in terms of learning speed compared to federated deep Q-network (DQN) and non-federated DDQN algorithms. In addition, we investigate the impact of batch size, network layers, and DDQN target network update frequency on the learning speed of the FDL.
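The sketch below is a minimal, illustrative reading of the procedure the abstract describes and is not the authors' implementation: each IoT device runs its own DDQN over offloading actions, the per-step cost (a stand-in here for the delay-plus-energy value obtained by solving the power or CPU-allocation subproblem) drives a TD update, and at the end of each episode a server performs FedAvg-style averaging of the agents' parameters. The device count, state size, discount factor, learning rate, and the toy linear Q-function are all assumptions made for a self-contained example.

```python
# Minimal sketch (not the authors' code) of per-agent DDQN updates with
# federated averaging at the end of an episode, as outlined in the abstract.
import numpy as np

N_DEVICES = 4     # IoT devices (agents); value assumed for illustration
STATE_DIM = 6     # assumed state size (e.g., queue/channel features)
N_ACTIONS = 2     # offload to the edge server, or compute locally
GAMMA = 0.9       # discount factor (assumed)
LR = 0.01         # learning rate (assumed)

rng = np.random.default_rng(0)

def init_params():
    """One agent's Q-network parameters (toy linear model: Q(s) = W s + b)."""
    return {"W": rng.normal(scale=0.1, size=(N_ACTIONS, STATE_DIM)),
            "b": np.zeros(N_ACTIONS)}

def q_values(params, state):
    return params["W"] @ state + params["b"]

def ddqn_target(online, target, next_state, cost):
    """DDQN rule: the online net selects the next action, the target net
    evaluates it. Since the objective is a cost (delay + energy), we argmin."""
    a_star = int(np.argmin(q_values(online, next_state)))
    return cost + GAMMA * q_values(target, next_state)[a_star]

def federated_average(param_list):
    """FedAvg-style aggregation: element-wise mean of all agents' weights."""
    return {k: np.mean([p[k] for p in param_list], axis=0)
            for k in param_list[0]}

# --- one training episode (schematic, single step per agent) ---
agents = [init_params() for _ in range(N_DEVICES)]
targets = [{k: v.copy() for k, v in p.items()} for p in agents]  # frozen copies

for p, t in zip(agents, targets):
    s = rng.normal(size=STATE_DIM)
    s_next = rng.normal(size=STATE_DIM)
    a = int(np.argmin(q_values(p, s)))   # epsilon-greedy exploration omitted
    cost = rng.uniform(0.0, 1.0)         # placeholder for the delay+energy cost
                                         # returned by the inner optimization
    y = ddqn_target(p, t, s_next, cost)
    td = q_values(p, s)[a] - y           # TD error for the chosen action
    p["W"][a] -= LR * td * s             # one SGD step on the squared TD error
    p["b"][a] -= LR * td

# End of episode: the server averages the models and broadcasts the result,
# which is the federated step the paper credits for faster learning.
global_params = federated_average(agents)
agents = [{k: v.copy() for k, v in global_params.items()} for _ in range(N_DEVICES)]
print("global W shape:", global_params["W"].shape)
```

In a faithful implementation the linear model would be replaced by the deep Q-networks studied in the paper, the random cost by the solution of the transmit-power or local-CPU allocation subproblem for the chosen offloading action, and the target copies would be refreshed at the paper's target-network update frequency.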

Citations (17)