In-context learning of state estimators (arXiv:2312.04509v1)

Published 7 Dec 2023 in eess.SY and cs.SY

Abstract: State estimation plays a pivotal role in several applications, including but not limited to advanced control design. Especially when dealing with nonlinear systems, state estimation is a nontrivial task, often entailing approximations and challenging fine-tuning phases. In this work, we propose to overcome these challenges by formulating an in-context state-estimation problem, enabling us to learn a state estimator for a class of (nonlinear) systems, abstracting from the particular instances of the state seen during training. To this end, we extend an in-context learning framework recently proposed for system identification, showing via a benchmark numerical example that this approach allows us to (i) use training data directly for the design of the state estimator, (ii) avoid extensive fine-tuning procedures, and (iii) achieve superior performance compared to state-of-the-art benchmarks.
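
The abstract does not come with code, but the idea it describes, training a single sequence model over many simulated systems so that it maps an input-output trajectory of a new system instance directly to state estimates, can be illustrated with a short sketch. The snippet below is a minimal illustration, assuming (as in the in-context system identification framework the paper extends) a Transformer backbone; the layer sizes, the causal mask, the loss, and the toy training step are assumptions made here for clarity, not details taken from the paper.

```python
import torch
import torch.nn as nn


class InContextStateEstimator(nn.Module):
    """Sketch of an in-context state estimator: a Transformer encoder that
    maps a trajectory of inputs u and noisy outputs y to state estimates.
    Architecture sizes and the causal-attention choice are assumptions,
    not details taken from the paper."""

    def __init__(self, n_u, n_y, n_x, d_model=64, n_heads=4, n_layers=4):
        super().__init__()
        self.embed = nn.Linear(n_u + n_y, d_model)   # embed (u_t, y_t) pairs
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_x)          # map features to state estimate

    def forward(self, u, y):
        # u: (batch, T, n_u), y: (batch, T, n_y)
        z = self.embed(torch.cat([u, y], dim=-1))
        T = z.shape[1]
        # Causal mask so the estimate at time t only uses data up to time t.
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(z.device)
        h = self.encoder(z, mask=mask)
        return self.head(h)                          # (batch, T, n_x)


if __name__ == "__main__":
    # Hypothetical training step: in practice the batches would contain
    # trajectories simulated from randomly sampled systems of the class,
    # with x_true provided by the simulator.
    model = InContextStateEstimator(n_u=1, n_y=1, n_x=2)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    u = torch.randn(8, 100, 1)        # placeholder inputs
    y = torch.randn(8, 100, 1)        # placeholder noisy outputs
    x_true = torch.randn(8, 100, 2)   # placeholder true states

    opt.zero_grad()
    x_hat = model(u, y)
    loss = nn.functional.mse_loss(x_hat, x_true)
    loss.backward()
    opt.step()
```

At inference time, the same trained network would be fed the measured inputs and outputs of a previously unseen system from the class and would return state estimates without any system-specific fine-tuning, which is the property the abstract highlights.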
