Abstract

Cross-project defect prediction (CPDP) plays an important role in estimating the most likely defect-prone software components, especially for new or inactive projects. To the best of our knowledge, few prior studies provide explicit guidelines on how to select suitable, high-quality training data from the large number of public software repositories. In this paper, we propose a training data simplification method for practical CPDP that considers multiple levels of granularity and filtering strategies for data sets. We also provide quantitative evidence for selecting a suitable filter in terms of the defect-proneness ratio. Based on an empirical study of 34 releases of 10 open-source projects, we compare the prediction performance of defect predictors built with five well-known classifiers, using training data simplified at different levels of granularity and with two popular filters. The results indicate that, when the multi-granularity simplification method is combined with an appropriate filter, prediction models based on Naive Bayes achieve fairly good performance and outperform the benchmark method.
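To make the pipeline described in the abstract concrete, the sketch below shows one plausible reading of it: cross-project training data is first simplified with a filter, and a Naive Bayes defect predictor is then trained on the retained instances. The abstract does not name the two filters it evaluates, so the nearest-neighbour (Burak-style) filter, the function names, and the choice of k = 10 here are illustrative assumptions, not the paper's exact method.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import NearestNeighbors


def nn_filter(source_X, source_y, target_X, k=10):
    """Training data simplification via a nearest-neighbour filter
    (an assumption; the paper's specific filters are not named in the abstract).
    For each target instance, keep its k nearest source instances."""
    nn = NearestNeighbors(n_neighbors=k).fit(source_X)
    _, idx = nn.kneighbors(target_X)
    keep = np.unique(idx.ravel())          # union of retained source rows
    return source_X[keep], source_y[keep]


def cross_project_predict(source_X, source_y, target_X, k=10):
    """Simplify the cross-project training data, then train a
    Naive Bayes defect predictor on the retained instances."""
    X_train, y_train = nn_filter(source_X, source_y, target_X, k)
    model = GaussianNB().fit(X_train, y_train)
    # Return hard labels and defect probabilities for the target project.
    return model.predict(target_X), model.predict_proba(target_X)[:, 1]
```

In this framing, swapping in a different filtering strategy or a different level of granularity (e.g., release-level versus project-level source data) only changes how `X_train` and `y_train` are selected; the downstream classifier is unchanged.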
