
An Empirical Study on Learning Bug-Fixing Patches in the Wild via Neural Machine Translation (1812.08693v2)

Published 20 Dec 2018 in cs.SE

Abstract: Millions of open-source projects with numerous bug fixes are available in code repositories. This proliferation of software development histories can be leveraged to learn how to fix common programming bugs. To explore such a potential, we perform an empirical study to assess the feasibility of using Neural Machine Translation techniques for learning bug-fixing patches for real defects. First, we mine millions of bug-fixes from the change histories of projects hosted on GitHub, in order to extract meaningful examples of such bug-fixes. Next, we abstract the buggy and corresponding fixed code, and use them to train an Encoder-Decoder model able to translate buggy code into its fixed version. In our empirical investigation we found that such a model is able to fix thousands of unique buggy methods in the wild. Overall, this model is capable of predicting fixed patches generated by developers in 9-50% of the cases, depending on the number of candidate patches we allow it to generate. Also, the model is able to emulate a variety of different Abstract Syntax Tree operations and generate candidate patches in a split second.

Citations (322)

Summary

  • The paper shows that NMT can learn to predict bug-fixing patches by training on bug-fix pairs mined from roughly 787K GitHub bug-fixing commits.
  • It employs fine-grained differencing with abstraction and tokenization to convert source code into a learnable format.
  • Results indicate up to 50% accuracy for small methods with over 82% syntactic correctness, underscoring practical feasibility.

Overview of Learning Bug-Fixing Patches via Neural Machine Translation

The paper "An Empirical Study on Learning Bug-Fixing Patches in the Wild via Neural Machine Translation" presents an empirical paper aimed at exploring the feasibility of leveraging Neural Machine Translation (NMT) for automated bug-fixing. The authors investigate the potential of NMT to automatically generate bug-fixing patches by learning from real-world data extracted from open-source repositories.

Methodology and Dataset

The research relies on a substantial dataset mined from GitHub, consisting of approximately 787,000 bug-fixing commits. From these, 2.3 million bug-fix pairs (BFPs) were extracted via fine-grained differencing of the changed methods, followed by abstraction and tokenization to transform the source code into a learnable format. Abstraction replaces identifiers and literals with typed abstract tokens while keeping a set of frequent idioms verbatim, which keeps the vocabulary manageable.
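
To make the abstraction step concrete, here is a minimal Python sketch of the idea: identifiers and literals are mapped to typed abstract tokens while a whitelist of frequent idioms is kept verbatim. This is an illustration only; the paper operates on lexed and parsed Java code, and the keyword and idiom sets below are hypothetical placeholders.

```python
import re

# Minimal sketch of the abstraction step (hypothetical; the paper uses a
# proper Java lexer/parser, not a regex pass).
JAVA_KEYWORDS = {"int", "if", "else", "return", "for", "while", "void", "new"}
IDIOMS = {"i", "j", "index", "size", "0", "1", "true", "false", "null"}

def abstract_tokens(tokens):
    """Replace identifiers/literals with typed IDs, keeping keywords and idioms."""
    mapping, counters, out = {}, {"VAR": 0, "LIT": 0}, []
    for tok in tokens:
        if tok in JAVA_KEYWORDS or tok in IDIOMS:
            out.append(tok)                      # keep keywords and idioms verbatim
            continue
        if re.fullmatch(r"[A-Za-z_]\w*", tok):   # identifier
            kind = "VAR"
        elif re.fullmatch(r"\d+", tok):          # numeric literal
            kind = "LIT"
        else:                                    # punctuation / operators
            out.append(tok)
            continue
        if tok not in mapping:                   # same name -> same abstract token
            counters[kind] += 1
            mapping[tok] = f"{kind}_{counters[kind]}"
        out.append(mapping[tok])
    return out, mapping

tokens = ["int", "total", "=", "count", "+", "1", ";"]
print(abstract_tokens(tokens))
# (['int', 'VAR_1', '=', 'VAR_2', '+', '1', ';'],
#  {'total': 'VAR_1', 'count': 'VAR_2'})
```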

Two primary datasets were prepared: one with small methods (BFP_small, ≤ 50 tokens) and one with medium methods (BFP_medium, 51-100 tokens). Training uses an RNN Encoder-Decoder with an attention mechanism to learn the conditional distribution of the fixed code given the buggy code.
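
The paper's exact architecture and hyperparameters are not reproduced here; the following PyTorch sketch only shows the general shape of such a model, with a bidirectional GRU encoder, additive attention, and a GRU decoder trained with teacher forcing. All layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative RNN Encoder-Decoder with additive attention (sizes are
# assumptions, not the paper's configuration).
class Seq2Seq(nn.Module):
    def __init__(self, vocab_size, emb=256, hid=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True, bidirectional=True)
        self.decoder = nn.GRU(emb + 2 * hid, hid, batch_first=True)
        self.attn = nn.Linear(3 * hid, 1)        # additive attention score
        self.out = nn.Linear(hid, vocab_size)

    def forward(self, src, tgt):
        enc, _ = self.encoder(self.embed(src))   # (B, S, 2*hid)
        hidden = enc.new_zeros(1, src.size(0), self.out.in_features)
        logits = []
        for t in range(tgt.size(1)):             # teacher forcing over target
            query = hidden[-1].unsqueeze(1).expand(-1, enc.size(1), -1)
            scores = self.attn(torch.cat([enc, query], dim=-1))      # (B, S, 1)
            context = (scores.softmax(dim=1) * enc).sum(1, keepdim=True)
            step = torch.cat([self.embed(tgt[:, t:t + 1]), context], dim=-1)
            dec, hidden = self.decoder(step, hidden)
            logits.append(self.out(dec))
        return torch.cat(logits, dim=1)          # (B, T, vocab)
```

Here src and tgt are batches of integer token IDs over the abstracted vocabulary; at inference time the teacher-forced loop would be replaced by beam-search decoding.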

Results and Evaluation

The paper reports strong results: the models perfectly predict between 9% and 50% of the small BFPs and between 3% and 28% of the medium BFPs, depending on beam width. With a beam width of 50, the model recovers nearly half of the small-method fixes. These results suggest that using NMT for bug-fixing tasks is viable. Furthermore, syntactic correctness is high: over 82% of the generated candidate patches are syntactically correct.
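
A "perfect prediction" here means that at least one of the top-k beam-search candidates is token-identical to the developer's fix. The sketch below shows how such a rate can be computed; model.beam_search is a hypothetical placeholder for whatever decoding interface the trained model exposes.

```python
# Sketch of the perfect-prediction metric: a bug counts as fixed if any of
# the top-k candidates exactly matches the developer's patch.
def perfect_prediction_rate(model, pairs, beam_width):
    hits = 0
    for buggy, fixed in pairs:                   # abstracted token sequences
        candidates = model.beam_search(buggy, k=beam_width)  # hypothetical API
        if any(cand == fixed for cand in candidates):
            hits += 1
    return hits / len(pairs)

# Wider beams trade decoding time for recall, e.g.:
# for k in (1, 5, 10, 50):
#     print(k, perfect_prediction_rate(model, test_pairs, k))
```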

The model's ability to emulate a range of Abstract Syntax Tree (AST) operations used by developers further demonstrates its practical effectiveness. Specifically, the model's patches covered between 28% and 64% of the AST operation types observed in the training dataset for small methods, and between 16% and 52% for medium methods, suggesting broad potential bug coverage.
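
One simple way to quantify this kind of coverage is as the fraction of AST edit-operation types observed in developer fixes that also appear in the model's patches. The helper below is a hypothetical sketch with made-up operation tuples; the paper itself derives operations via fine-grained AST differencing.

```python
# Hypothetical sketch: fraction of AST edit-operation types seen in
# developer fixes that also occur in the model's candidate patches.
def operation_coverage(developer_ops, model_ops):
    dev_types = set(developer_ops)
    covered = dev_types & set(model_ops)
    return len(covered) / len(dev_types) if dev_types else 0.0

dev = [("Insert", "IfStatement"), ("Update", "MethodInvocation"),
       ("Delete", "ReturnStatement")]
mod = [("Insert", "IfStatement"), ("Update", "MethodInvocation")]
print(operation_coverage(dev, mod))  # 0.666...
```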

Implications and Future Directions

The findings indicate that automated program repair can exploit the redundancy of code and the idiomatic fixes latent in GitHub's rich history of changes. In practical terms, such models could assist developers by quickly proposing plausible candidate patches.

The authors outline several directions for future work: extending the granularity from methods to classes or packages, refining the abstraction methodology to preserve more meaningful context, reducing potential overfitting through better hyperparameter selection, and extending the approach to other programming languages to validate its robustness across different environments.

Concluding Remarks

The paper marks a significant step in empirical software engineering, demonstrating how recent advances in neural networks can be adapted for automated program repair. It offers a concrete example of applying NMT outside traditional NLP domains, extending the reach of machine learning into practical software development and maintenance. Such models could help developers identify and fix defects more efficiently, motivating further research on machine learning for code-related tasks.