
Should AI Optimize Your Code? A Comparative Study of Classical Optimizing Compilers Versus Current Large Language Models (2406.12146v2)

Published 17 Jun 2024 in cs.AI, cs.PF, and cs.SE

Abstract: Traditional optimizing compilers have played an important role in coping with the growing complexity of modern software systems, and the need for efficient parallel programming on current architectures demands strong optimization techniques. The emergence of LLMs raises intriguing questions about the potential of these AI approaches to revolutionize code optimization methodologies. This work addresses an essential question for the compiler community: "Can AI-driven models revolutionize the way we approach code optimization?" To answer it, we present a comparative analysis of three classical optimizing compilers and two recent LLMs, evaluating their respective abilities and limitations in optimizing code for maximum efficiency. We also introduce a benchmark suite of challenging optimization patterns and an automatic mechanism for evaluating the performance and correctness of the code generated by LLMs. We evaluated the LLMs with three prompting strategies: Simple Instruction Prompting (IP), Detailed Instruction Prompting (DIP), and Chain of Thought (CoT). A key finding is that while LLMs have the potential to outperform current optimizing compilers, they often generate incorrect code on large code sizes, calling for automated verification methods. In addition, expressing a compiler strategy as part of the LLM prompt substantially improves overall performance. Our evaluation across three benchmark suites shows CodeLlama-70B as the strongest LLM, achieving speedups of up to 1.75x, while CETUS is the best of the evaluated optimizing compilers, achieving a maximum speedup of 1.67x. We also found substantial differences among the three prompting strategies.
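The abstract mentions an automatic mechanism for checking the performance and correctness of LLM-generated code. The paper does not reproduce that harness here, but the idea can be illustrated with a minimal sketch: compile the original and the LLM-rewritten benchmark, reject the rewrite if its output diverges from the baseline, and otherwise report the measured speedup. The file names, compiler flags, and single-file C benchmark layout below are assumptions for illustration, not the authors' actual setup.

```python
# Illustrative sketch of an automated correctness + speedup check for
# LLM-rewritten C benchmarks. Not the paper's harness; file names and
# gcc flags are assumed for the example.
import subprocess
import time


def compile_c(src: str, exe: str) -> None:
    # Assumed default flags; real benchmarks may need per-suite options.
    subprocess.run(["gcc", "-O3", "-fopenmp", src, "-o", exe], check=True)


def run_timed(exe: str) -> tuple[str, float]:
    start = time.perf_counter()
    result = subprocess.run([f"./{exe}"], capture_output=True, text=True, check=True)
    return result.stdout, time.perf_counter() - start


def evaluate(original_src: str, llm_src: str) -> None:
    compile_c(original_src, "baseline")
    compile_c(llm_src, "candidate")
    ref_out, ref_time = run_timed("baseline")
    new_out, new_time = run_timed("candidate")
    if new_out != ref_out:
        # Incorrect rewrites are discarded, matching the paper's point that
        # LLM output needs automated verification.
        print("REJECTED: candidate output differs from baseline")
    else:
        print(f"ACCEPTED: speedup = {ref_time / new_time:.2f}x")


if __name__ == "__main__":
    # Hypothetical benchmark file and its LLM-optimized rewrite.
    evaluate("benchmark.c", "benchmark_llm.c")
```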

Citations (1)
