Pre-trained Model-based Automated Software Vulnerability Repair: How Far are We? (2308.12533v2)
Abstract: Various approaches have been proposed to help under-resourced security researchers detect and analyze software vulnerabilities, yet fixing vulnerabilities remains incredibly time-consuming and labor-intensive. The time lag between reporting and fixing a vulnerability leaves software systems significantly exposed to possible attacks. Recently, several techniques have applied pre-trained models to fix security vulnerabilities and have demonstrated improved repair accuracy. However, the effectiveness of existing pre-trained models has not been systematically analyzed, and little is known about their advantages and disadvantages. To bridge this gap, we perform the first extensive study of applying various pre-trained models to vulnerability repair. The results show that the studied pre-trained models consistently outperform the state-of-the-art technique VRepair, with prediction accuracies of 32.94%–44.96%. We also investigate the impact of the major phases in the vulnerability repair workflow. Surprisingly, a simplistic transfer-learning approach improves the prediction accuracy of pre-trained models by 9.40% on average. We further discuss the capabilities and limitations of pre-trained models, and pinpoint practical guidelines for advancing pre-trained model-based vulnerability repair. Our study highlights the promising future of adopting pre-trained models to patch real-world vulnerabilities.