
Abstract

Context: In software development, maintaining high software quality is a persistent challenge. Addressing it is often hampered by a limited understanding of how specific code modifications influence quality metrics. Objective: This study aims to bridge this gap with an approach for assessing and interpreting the impact of code modifications. The underlying hypothesis posits that code modifications inducing similar changes in software quality metrics can be grouped into distinct clusters, which can be effectively described using an AI language model, thus providing an accessible understanding of code changes and their quality implications. Method: To validate this hypothesis, we built and analyzed a dataset from popular GitHub repositories, segmented into individual code modifications. Each project was evaluated against software quality metrics before and after each modification was applied. Machine learning techniques were used to cluster these modifications based on the induced changes in the metrics. In parallel, an AI language model was employed to generate descriptions of each modification's function. Results: The results reveal distinct clusters of code modifications, each accompanied by a concise description of its collective impact on software quality metrics. Conclusions: The findings suggest that this research is a significant step towards a comprehensive understanding of the complex relationship between code changes and software quality, one that has the potential to transform software maintenance strategies and enable the development of more accurate quality prediction models.
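The abstract does not name the clustering algorithm or the exact metric set. A minimal sketch of the method's core step, assuming k-means over per-modification metric deltas (the metric names and toy data below are illustrative, not taken from the paper), might look like:

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical quality metrics measured before and after each modification;
# the paper's actual metric set is not specified in the abstract.
METRICS = ["cyclomatic_complexity", "maintainability_index", "loc", "coupling"]

def metric_deltas(before: np.ndarray, after: np.ndarray) -> np.ndarray:
    """Per-modification change vectors: one row per code modification."""
    return after - before

# Toy data: 6 modifications x 4 metrics, measured pre- and post-application.
rng = np.random.default_rng(0)
before = rng.normal(size=(6, len(METRICS)))
after = before + rng.normal(scale=0.5, size=before.shape)

deltas = metric_deltas(before, after)

# Standardize so no single metric dominates the distance computation,
# then group modifications that induce similar changes in the metrics.
X = StandardScaler().fit_transform(deltas)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # cluster assignment for each modification

Each resulting cluster could then be summarized by passing its member modifications to a language model, as the study describes; the choice of k-means and the delta representation here are assumptions for illustration only.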
