Iterative Thresholding and Projection Algorithms and Model-Based Deep Neural Networks for Sparse LQR Control Design (2212.02929v2)

Published 6 Dec 2022 in cs.DC, cs.SY, and eess.SY

Abstract: In this paper, we consider an LQR design problem for distributed control systems. For large-scale distributed systems, finding a solution can be computationally demanding due to the communication required among agents. To this end, we address an LQR minimization problem with a regularization term that promotes a sparse feedback matrix, which reduces the number of communication links in the distributed control system. We introduce simple but efficient iterative algorithms: the Iterative Shrinkage Thresholding Algorithm (ISTA) and the Iterative Sparse Projection Algorithm (ISPA). They provide a trade-off between the LQR cost and the sparsity level of the feedback matrix. Moreover, to improve the speed of the proposed algorithms, we design deep neural network models based on them. Numerical experiments demonstrate that our algorithms outperform previous methods based on the Alternating Direction Method of Multipliers (ADMM) [2] and Gradient Support Pursuit (GraSP) [3], and that the deep neural network models improve the convergence speed of the proposed algorithms.
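
The abstract does not spell out the update equations, so the following is a minimal Python sketch of what an ISTA-style sparse LQR iteration typically looks like: a gradient step on the LQR cost (computed from two closed-loop Lyapunov equations, as in the standard sparsity-promoting LQR formulation), followed by elementwise soft thresholding, with a hard-thresholding projection shown as the ISPA-style alternative. All problem data, step sizes, and helper names (`lqr_cost_grad`, `project_sparse`, etc.) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Hypothetical test problem (not from the paper): a small random system.
rng = np.random.default_rng(0)
n, m = 6, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
Q, R, X0 = np.eye(n), np.eye(m), np.eye(n)  # state/input weights, initial-state covariance


def lqr_cost_grad(K):
    """J(K) = trace((Q + K'RK) L) for u = -Kx, plus its gradient.

    L and P solve closed-loop Lyapunov equations (standard sparsity-promoting
    LQR formulation; the paper's exact expressions may differ).
    """
    Acl = A - B @ K
    if np.max(np.linalg.eigvals(Acl).real) >= 0:   # K must remain stabilizing
        return np.inf, None
    L = solve_continuous_lyapunov(Acl, -X0)                    # Acl L + L Acl' = -X0
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))   # Acl' P + P Acl = -(Q + K'RK)
    return np.trace((Q + K.T @ R @ K) @ L), 2 * (R @ K - B.T @ P) @ L


def soft_threshold(M, tau):
    """Elementwise shrinkage (the ISTA proximal step for an l1 penalty)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)


def project_sparse(M, s):
    """Keep the s largest-magnitude entries (ISPA-style sparse projection)."""
    out = np.zeros_like(M)
    keep = np.unravel_index(np.argsort(np.abs(M), axis=None)[-s:], M.shape)
    out[keep] = M[keep]
    return out


# Start from the dense LQR gain, which is stabilizing by construction.
P0 = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P0)

step, gamma = 1e-3, 10.0   # step size and l1 weight: tuning knobs, not paper values
for _ in range(500):
    J, G = lqr_cost_grad(K)
    K_next = soft_threshold(K - step * G, step * gamma)   # ISTA update
    # ISPA variant: K_next = project_sparse(K - step * G, s=10)
    # A full implementation would backtrack on `step` here; this sketch
    # simply rejects updates that destabilize the closed loop.
    if lqr_cost_grad(K_next)[0] < np.inf:
        K = K_next

print("nonzeros in K:", np.count_nonzero(np.abs(K) > 1e-6),
      "LQR cost:", lqr_cost_grad(K)[0])
```

The deep network models mentioned in the abstract would presumably unroll a fixed number of such iterations and learn quantities like the step size and threshold from data, in the spirit of learned ISTA; the abstract alone does not specify the architecture.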

Citations (2)
