Should AI Optimize Your Code? A Comparative Study of Classical Optimizing Compilers Versus Current Large Language Models (2406.12146v2)

Published 17 Jun 2024 in cs.AI, cs.PF, and cs.SE

Abstract: Traditional optimizing compilers have played an important role in adapting to the growing complexity of modern software systems. Efficient parallel programming on current architectures demands strong optimization techniques. The emergence of LLMs raises intriguing questions about the potential of these AI approaches to revolutionize code optimization methodologies. This work aims to answer an essential question for the compiler community: "Can AI-driven models revolutionize the way we approach code optimization?" To address this question, we present a comparative analysis between three classical optimizing compilers and two recent LLMs, evaluating their respective abilities and limitations in optimizing code for maximum efficiency. In addition, we introduce a benchmark suite of challenging optimization patterns and an automatic mechanism for evaluating the performance and correctness of the code generated by LLMs. We used three different prompting strategies to evaluate the LLMs: Simple Instruction Prompting (IP), Detailed Instruction Prompting (DIP), and Chain of Thought (CoT). A key finding is that while LLMs have the potential to outperform current optimizing compilers, they often generate incorrect code on large code sizes, calling for automated verification methods. In addition, expressing a compiler strategy as part of the LLM's prompt substantially improves its overall performance. Our evaluation across three benchmark suites shows CodeLlama-70B as the superior LLM, capable of achieving speedups of up to 1.75x, while CETUS is the best of the current optimizing compilers, achieving a maximum speedup of 1.67x. We also found substantial differences among the three prompting strategies.
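
The abstract's "automatic mechanism for evaluating the performance and correctness of the code generated by LLMs" can be illustrated with a minimal sketch: compile a baseline kernel and its LLM-optimized variant, reject the variant if its output diverges from the baseline, and otherwise report the speedup. The file names, compiler flags, and output-comparison rule below are illustrative assumptions, not the paper's actual harness.

```python
import subprocess
import time
import filecmp

# Hypothetical source files; the paper's benchmark kernels and the
# LLM-generated variants are not reproduced on this page.
BASELINE_SRC = "kernel_baseline.c"
LLM_SRC = "kernel_llm.c"


def build(src: str, exe: str) -> None:
    """Compile a C source file with optimization and OpenMP enabled (assumes gcc)."""
    subprocess.run(["gcc", "-O3", "-fopenmp", src, "-o", exe], check=True)


def run(exe: str, out_file: str) -> float:
    """Run an executable, capture its stdout, and return wall-clock time in seconds."""
    start = time.perf_counter()
    with open(out_file, "w") as f:
        subprocess.run([f"./{exe}"], stdout=f, check=True)
    return time.perf_counter() - start


def evaluate() -> None:
    build(BASELINE_SRC, "baseline")
    build(LLM_SRC, "llm_opt")

    t_base = run("baseline", "baseline.out")
    t_llm = run("llm_opt", "llm_opt.out")

    # Correctness gate: the LLM-optimized code must reproduce the baseline
    # output before any speedup is reported.
    if not filecmp.cmp("baseline.out", "llm_opt.out", shallow=False):
        print("REJECTED: LLM-optimized code produced different output")
        return

    print(f"speedup: {t_base / t_llm:.2f}x")


if __name__ == "__main__":
    evaluate()
```

In practice such a gate matters because, as the abstract notes, LLM-generated optimizations frequently break correctness on larger codes, so speedup numbers are only meaningful after an output-equivalence check.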

Citations (1)