Comparing the performance of fault prediction models which report multiple performance measures: recomputing the confusion matrix

David Bowes, Tracy Hall, David Gray

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Many hundreds of fault prediction models have been published in the literature. The predictive performance of these models is often reported using a variety of different measures, and most of these measures are not directly comparable. This lack of comparability means that it is often difficult to evaluate the performance of one model against another. Our aim is to present an approach that allows other researchers and practitioners to transform many of the performance measures reported in categorical studies back into a confusion matrix. Once performance is expressed as a confusion matrix, alternative preferred performance measures can then be derived. Our approach has enabled us to compare the performance of 600 models published in 42 studies. We demonstrate the application of our approach on several case studies, and discuss the advantages and implications of doing this.
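To illustrate the general idea described in the abstract (this is a minimal sketch of one common case, not the authors' exact procedure), a confusion matrix can be recovered when a study reports precision, recall, the number of truly defective modules, and the total number of modules. The function names and the numeric values below are hypothetical and purely illustrative.

# Illustrative sketch: recovering TP, FP, FN, TN from precision, recall and
# class counts, then deriving a preferred measure (here F1) from the matrix.
# Assumes precision, recall, n_defective (= TP + FN) and n_total are reported.

def reconstruct_confusion_matrix(precision, recall, n_defective, n_total):
    """Recover the confusion matrix cells from reported measures.

    precision   = TP / (TP + FP)
    recall      = TP / (TP + FN)
    n_defective = TP + FN
    n_total     = TP + FP + FN + TN
    """
    tp = recall * n_defective                 # TP = recall * (TP + FN)
    fn = n_defective - tp                     # FN = (TP + FN) - TP
    fp = tp * (1.0 - precision) / precision   # from precision = TP / (TP + FP)
    tn = n_total - tp - fn - fp               # remaining modules are true negatives
    return {"TP": round(tp), "FP": round(fp), "FN": round(fn), "TN": round(tn)}

def f_measure(tp, fp, fn):
    """Example of deriving an alternative measure (F1) once the matrix is known."""
    return 2 * tp / (2 * tp + fp + fn)

if __name__ == "__main__":
    # Hypothetical reported values, for demonstration only.
    cm = reconstruct_confusion_matrix(precision=0.6, recall=0.75,
                                      n_defective=40, n_total=400)
    print(cm)                                    # {'TP': 30, 'FP': 20, 'FN': 10, 'TN': 340}
    print(f_measure(cm["TP"], cm["FP"], cm["FN"]))  # 0.666...

Studies report different combinations of measures, so in practice several such reconstruction paths are needed; the one above simply shows that four independent reported quantities are enough to pin down the four cells of the matrix.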
Original language: English
Title of host publication: Proceedings of the 8th International Conference on Predictive Models in Software Engineering
Subtitle of host publication: PROMISE'12
Place of publication: New York, NY, USA
Publisher: ACM Press
Pages: 109-118
Number of pages: 10
ISBN (Print): 978-1-4503-1241-7
DOIs
Publication status: Published - 2012

Publication series

Name: PROMISE '12
Publisher: ACM
