Exploring the Capabilities and Limitations of Large Language Models in the Electric Energy Sector (2403.09125v5)

Published 14 Mar 2024 in eess.SY and cs.SY

Abstract: Large language models (LLMs) deployed as chatbots have drawn remarkable attention thanks to their versatile capabilities in natural language processing and a wide range of downstream tasks. While there is great enthusiasm for adopting such foundation model-based artificial intelligence tools across virtually every sector, the capabilities and limitations of LLMs in improving the operation of the electric energy sector still need to be explored, and this article identifies fruitful directions in this regard. Key future research directions include data collection systems for fine-tuning LLMs, embedding power system-specific tools in LLMs, retrieval-augmented generation (RAG)-based knowledge pools to improve the quality of LLM responses, and the use of LLMs in safety-critical applications.
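
To make the RAG-based knowledge pool direction concrete, below is a minimal sketch of the idea: domain snippets are retrieved for a query and prepended to the prompt so the LLM's answer is grounded in power-system documentation. The snippets, the bag-of-words retriever, and the prompt template are illustrative assumptions, not the authors' implementation; a production system would use learned embeddings, a vector store, and an actual LLM call.

```python
# Minimal RAG sketch for a power-system knowledge pool (illustrative only).
# The snippets, scoring scheme, and prompt format are assumptions for
# demonstration, not the design proposed in the paper.

from collections import Counter
import math

# Hypothetical knowledge pool: short snippets from operating procedures or
# standards that the LLM should ground its answers in.
KNOWLEDGE_POOL = [
    "Under-frequency load shedding is armed when system frequency drops below 59.3 Hz.",
    "N-1 security requires the grid to withstand the loss of any single element.",
    "Automatic generation control adjusts generator setpoints every few seconds to track ACE.",
]

def tokenize(text: str) -> Counter:
    """Simple bag-of-words representation (lowercased, punctuation stripped)."""
    words = [w.strip(".,;:()").lower() for w in text.split()]
    return Counter(w for w in words if w)

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k snippets most similar to the query."""
    q = tokenize(query)
    ranked = sorted(KNOWLEDGE_POOL,
                    key=lambda doc: cosine_similarity(q, tokenize(doc)),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt; the LLM call itself is left abstract."""
    context = "\n".join(f"- {snippet}" for snippet in retrieve(query))
    return (
        "Answer using only the context below; say 'unknown' if it is insufficient.\n"
        f"Context:\n{context}\n\nQuestion: {query}\n"
    )

if __name__ == "__main__":
    print(build_prompt("At what frequency is under-frequency load shedding armed?"))
```

The same retrieve-then-prompt pattern also suggests how power system-specific tools (e.g., a load-flow solver) could be exposed to an LLM: the model's response would trigger a tool call whose numerical output is fed back as additional context before the final answer is generated.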

