Granite Code Models: A Family of Open Foundation Models for Code Intelligence

(arXiv:2405.04324)

Published May 7, 2024 in cs.AI, cs.CL, and cs.SE

Abstract

LLMs trained on code are revolutionizing the software development process. Increasingly, code LLMs are being integrated into software development environments to improve the productivity of human programmers, and LLM-based agents are beginning to show promise for handling complex tasks autonomously. Realizing the full potential of code LLMs requires a wide range of capabilities, including code generation, fixing bugs, explaining and documenting code, maintaining repositories, and more. In this work, we introduce the Granite series of decoder-only code models for code generative tasks, trained with code written in 116 programming languages. The Granite Code model family consists of models ranging in size from 3 to 34 billion parameters, suitable for applications ranging from complex application modernization tasks to on-device memory-constrained use cases. Evaluation on a comprehensive set of tasks demonstrates that Granite Code models consistently reach state-of-the-art performance among available open-source code LLMs. The Granite Code model family was optimized for enterprise software development workflows and performs well across a range of coding tasks (e.g., code generation, fixing, and explanation), making it a versatile all-around code model. We release all our Granite Code models under an Apache 2.0 license for both research and commercial use.

Figure: Granite Code models show improved function-calling accuracy at larger model sizes on the Berkeley Function-Calling Leaderboard (best viewed in color).

Overview

  • Granite Code models are a series of decoder-only LLMs trained on programming data from 116 languages, excelling in various software development tasks.

  • The models employ techniques like RoPE embeddings and Grouped-Query Attention, with configurations chosen per model size to improve efficiency on coding tasks.

  • Granite Code models promise significant advancements in AI-driven software development, with potential for future expansion and open-source contributions under the Apache 2.0 license.

Understanding Granite Code Models: Enhancements in AI-driven Software Development

Overview of Granite Code Models

Granite Code models present an innovative series of decoder-only LLMs trained on a diverse programming language dataset, showcasing strong performance across a range of software development tasks. These models range from 3 to 34 billion parameters, offering scalable solutions for both complex and constrained software environments.

Training Data and Model Architecture

The Granite Code models are trained on code from 116 programming languages, totaling 3 to 4 trillion tokens. Training proceeds in two main phases: a first phase on code-only data, followed by a second phase on a carefully curated mixture of high-quality code and natural-language data, giving the models a grasp of both programming languages and their syntactic nuances. A sketch of what such a phased mixture could look like follows.
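To make the two-phase idea concrete, the sketch below converts hypothetical per-phase mixture weights into absolute token budgets. The phase definitions, source names, and weights here are invented for illustration and do not reflect the actual Granite training recipe.

```python
# Illustrative sketch only: the paper describes two training phases
# (code-only pretraining, then a mixed code + natural-language phase).
# All source names and weights below are hypothetical.

PHASE_1 = {
    "target_tokens": 4_000_000_000_000,  # 3-4T tokens of code
    "sources": {
        "code_116_languages": 1.0,  # code-only phase
    },
}

PHASE_2 = {
    "target_tokens": 500_000_000_000,  # additional mixed-data tokens (assumed budget)
    "sources": {
        "high_quality_code": 0.8,      # hypothetical weight
        "natural_language_text": 0.2,  # hypothetical weight
    },
}

def tokens_per_source(phase: dict) -> dict:
    """Convert mixture weights into absolute token budgets per source."""
    return {
        name: int(weight * phase["target_tokens"])
        for name, weight in phase["sources"].items()
    }

if __name__ == "__main__":
    for name, phase in [("phase 1", PHASE_1), ("phase 2", PHASE_2)]:
        print(name, tokens_per_source(phase))
```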

The architecture of Granite Code models is a transformer decoder, with configurations tuned for each model size. Depending on the size, the models use techniques such as rotary position embeddings (RoPE), Grouped-Query Attention (GQA), and Multi-Query Attention (MQA) to improve inference efficiency without sacrificing quality. A minimal sketch of GQA follows.
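To show why GQA helps efficiency, here is a minimal PyTorch sketch in which many query heads share a smaller set of key/value heads, shrinking the KV cache at inference time. The dimensions and weight shapes are illustrative, not Granite's actual configuration; MQA is the special case where all query heads share a single KV head.

```python
# Minimal sketch of grouped-query attention (GQA); dimensions are
# illustrative, not Granite's actual config. MQA = num_kv_heads == 1.
import torch
import torch.nn.functional as F

def grouped_query_attention(x, wq, wk, wv, num_heads, num_kv_heads):
    """x: (batch, seq, dim). Query heads share a smaller set of KV heads,
    cutting KV-cache size by a factor of num_heads // num_kv_heads."""
    b, s, d = x.shape
    head_dim = d // num_heads
    q = (x @ wq).view(b, s, num_heads, head_dim).transpose(1, 2)
    k = (x @ wk).view(b, s, num_kv_heads, head_dim).transpose(1, 2)
    v = (x @ wv).view(b, s, num_kv_heads, head_dim).transpose(1, 2)
    # Repeat each KV head so every query head has a matching KV head.
    group = num_heads // num_kv_heads
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
    return out.transpose(1, 2).reshape(b, s, d)

# Toy usage: 8 query heads sharing 2 KV heads.
x = torch.randn(1, 16, 64)
wq = torch.randn(64, 64)
wk = torch.randn(64, 16)  # num_kv_heads * head_dim = 2 * 8
wv = torch.randn(64, 16)
y = grouped_query_attention(x, wq, wk, wv, num_heads=8, num_kv_heads=2)
print(y.shape)  # torch.Size([1, 16, 64])
```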

Unique Features and Performance

The significant features of Granite Code models include:

  • Versatility Across Coding Tasks: The models excel not just in code generation but also in fixing bugs, generating documentation, and other critical tasks that are part of the software development workflow.
  • Optimized for Enterprise Use: They are tuned to the needs of enterprise systems, delivering reliable performance while ensuring data transparency and adherence to licensing requirements. A brief usage sketch follows this list.
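As a concrete starting point, the sketch below loads a Granite Code checkpoint with Hugging Face transformers and completes a function stub. The repo id follows IBM's published naming but is an assumption here; verify the exact id on the Hugging Face Hub before running.

```python
# Minimal usage sketch with Hugging Face transformers.
# The model id is assumed; check the Hub for the exact repo name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3b-code-base"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "def fibonacci(n: int) -> int:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```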

Evaluation Across Benchmarks

Granite Code models were evaluated extensively against a broad set of benchmarks, typically matching or outperforming state-of-the-art open-source models. For instance, they post strong results on HumanEvalPack and MultiPL-E, which together cover code generation across many languages as well as bug fixing and code explanation, typically reported via the pass@k metric (sketched below).
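HumanEval-style benchmarks report pass@k, the probability that at least one of k sampled completions passes the unit tests. The function below implements the standard unbiased estimator from the Codex paper; it is general benchmark machinery, not Granite-specific code.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.
    n: total completions sampled, c: completions that passed, k: budget."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one pass
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# Example: 20 samples, 3 passing -> pass@1 = 1 - 17/20 = 0.15
print(pass_at_k(n=20, c=3, k=1))
```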

Future Directions and Accessibility

Looking ahead, the Granite Code team plans to extend the capabilities of these models with updates that could include long-context variants for more complex tasks and versions specialized for particular programming languages such as Python or Java.

Additionally, the commitment to open source is clear: all models are released under the Apache 2.0 license, broadening the scope for both academic research and industrial application and paving the way for wider adoption and continuous improvement.

Integrating these models into software development tools could significantly streamline coding tasks, reduce the incidence of human error, and thereby shorten development times, highlighting the practical value of bringing AI into everyday coding environments.

Implications for Data Scientists

For data scientists and developers, Granite Code models offer a robust tool that enhances productivity by automating a significant portion of coding workflows. The versatility across various programming languages and tasks makes these models an invaluable resource, especially in environments where quick adaptation to different languages or fast problem-solving capabilities are crucial.

Conclusion

In conclusion, Granite Code models mark a significant step forward in AI-driven software development. By combining strong performance with enterprise-focused features, they offer a practical path for integrating AI into software development, promising substantial gains in efficiency and effectiveness in coding practices across sectors.
