Resource Allocation in Large Language Model Integrated 6G Vehicular Networks (2403.19016v1)

Published 27 Mar 2024 in cs.DC, cs.SY, eess.SP, eess.SY, and math.OC

Abstract: In the upcoming 6G era, vehicular networks are shifting from simple Vehicle-to-Vehicle (V2V) communication to the more complex Vehicle-to-Everything (V2X) connectivity. At the forefront of this shift is the incorporation of LLMs into vehicles. Known for their sophisticated natural language processing abilities, LLMs change how users interact with their vehicles. This integration facilitates voice-driven commands and interactions, departing from the conventional manual control systems. However, integrating LLMs into vehicular systems presents notable challenges. The substantial computational demands and energy requirements of LLMs pose significant challenges, especially in the constrained environment of a vehicle. Additionally, the time-sensitive nature of tasks in vehicular networks adds another layer of complexity. In this paper, we consider an edge computing system where vehicles process the initial layers of LLM computations locally, and offload the remaining LLM computation tasks to the Roadside Units (RSUs), envisioning a vehicular ecosystem where LLM computations seamlessly interact with the ultra-low latency and high-bandwidth capabilities of 6G networks. To balance the trade-off between completion time and energy consumption, we formulate a multi-objective optimization problem to minimize the total cost of the vehicles and RSUs. The problem is then decomposed into two sub-problems, which are solved by sequential quadratic programming (SQP) method and fractional programming technique. The simulation results clearly indicate that the algorithm we have proposed is highly effective in reducing both the completion time and energy consumption of the system.
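The abstract describes a split-execution scheme with a weighted time-plus-energy objective but reproduces no equations here, so the following is a minimal, hypothetical Python sketch of that kind of cost model: a vehicle runs a fraction rho of the LLM layers locally at CPU frequency f_v, uploads the intermediate activations at transmit power p, and the RSU completes the remaining layers. All constants, the simplified channel and energy models, and the use of SciPy's SLSQP solver as a stand-in for the SQP step mentioned in the abstract are assumptions made for illustration, not the paper's actual formulation or parameters.

```python
# Hypothetical sketch of a weighted completion-time + energy cost for one
# vehicle-RSU pair. All numbers and the simplified model are illustrative
# assumptions, not values from the paper.
import numpy as np
from scipy.optimize import minimize

TOTAL_CYCLES = 5e11     # total LLM inference workload (CPU cycles), assumed
DATA_BITS = 8e6         # intermediate activations to upload (bits), assumed
BANDWIDTH = 20e6        # uplink bandwidth (Hz)
NOISE = 1e-13           # noise power (W)
CHANNEL_GAIN = 1e-7     # uplink channel gain
F_RSU = 2e10            # RSU compute speed (cycles/s), fixed for simplicity
KAPPA = 1e-28           # effective switched-capacitance coefficient
W_TIME, W_ENERGY = 0.6, 0.4   # trade-off weights (assumed)

def cost(x):
    """Weighted completion-time + energy cost for x = (rho, f_ghz, p)."""
    rho, f_ghz, p = x                   # rho: fraction of layers kept on the vehicle
    f_v = f_ghz * 1e9                   # vehicle CPU frequency (cycles/s)
    t_local = rho * TOTAL_CYCLES / f_v              # local computation time
    rate = BANDWIDTH * np.log2(1.0 + p * CHANNEL_GAIN / NOISE)
    t_up = DATA_BITS / rate                         # upload time for activations
    t_rsu = (1.0 - rho) * TOTAL_CYCLES / F_RSU      # remote computation time
    e_local = KAPPA * f_v**2 * rho * TOTAL_CYCLES   # dynamic compute energy
    e_up = p * t_up                                 # transmission energy
    total_time = t_local + t_up + t_rsu
    return W_TIME * total_time + W_ENERGY * (e_local + e_up)

x0 = np.array([0.5, 1.0, 0.5])                     # initial (rho, GHz, W)
bounds = [(0.05, 0.95), (0.1, 2.0), (1e-3, 1.0)]   # box constraints
# SLSQP is SciPy's sequential-quadratic-programming-style solver, used here
# only as a stand-in for the SQP sub-problem described in the abstract.
res = minimize(cost, x0, method="SLSQP", bounds=bounds)
print("decision:", res.x, "cost:", res.fun)
```

Note that the paper further decomposes the joint problem and handles one sub-problem with a fractional programming technique; this single-solver toy does not reproduce that decomposition.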
