University of Hertfordshire

What is the impact of imbalance on software defect prediction performance?

Research output: Chapter in Book/Report/Conference proceedingConference contribution

Original language: English
Title of host publication: PROMISE '15
Subtitle of host publication: Proceedings of the 11th International Conference on Predictive Models and Data Analytics in Software Engineering
Publisher: ACM Press
ISBN (Print): 9781450337151
DOIs
Publication status: Published - 21 Oct 2015
Event: 11th International Conference on Predictive Models and Data Analytics in Software Engineering, PROMISE 2015 - Beijing, China
Duration: 21 Oct 2015 → …

Conference

Conference: 11th International Conference on Predictive Models and Data Analytics in Software Engineering, PROMISE 2015
Country: China
City: Beijing
Period: 21/10/15 → …

Abstract

Software defect prediction performance varies over a large range. Menzies suggested there is a ceiling effect of 80% Recall [8]. Most of the datasets used are highly imbalanced. This paper asks: what is the empirical effect of using datasets with varying levels of imbalance on predictive performance? We use data synthesised by a previous meta-analysis of 600 fault prediction models and their results. Four model evaluation measures (the Matthews Correlation Coefficient (MCC), F-Measure, Precision and Recall) are compared against the corresponding data imbalance ratio. When the data are imbalanced, the predictive performance of software defect prediction studies is low. As the data become more balanced, predictive performance increases from an average MCC of 0.15 until the minority class makes up 20% of the instances in the dataset, where the MCC reaches an average value of about 0.34. As the proportion of the minority class rises above 20%, predictive performance, measured by MCC, does not increase significantly. We conclude that comparisons of defect prediction study results should take into account the imbalance of the data.
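The four evaluation measures compared in the abstract are all derived from confusion-matrix counts. A minimal sketch of how they are computed is below; the counts used in the example are illustrative only and are not taken from the study's data.

```python
import math

def confusion_metrics(tp, fp, fn, tn):
    """Compute Precision, Recall, F-Measure and MCC from confusion-matrix counts.

    tp/fp/fn/tn are the true-positive, false-positive, false-negative and
    true-negative counts, with the defective (minority) class as positive.
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if (precision + recall) else 0.0)
    # MCC stays informative under class imbalance because it uses all four counts.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"precision": precision, "recall": recall,
            "f_measure": f_measure, "mcc": mcc}

# Illustrative imbalanced test set: 10 defective vs 90 non-defective instances.
m = confusion_metrics(tp=5, fp=10, fn=5, tn=80)
```

For this hypothetical classifier, Recall is 0.5 and the F-Measure is 0.4, while the MCC of roughly 0.33 also reflects the performance on the majority class, which is why the study uses MCC as its primary measure.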
