DConfusion: A technique to allow cross study performance evaluation of fault prediction studies.

David Bowes, Tracy Hall, David Gray

Research output: Contribution to journal › Article › peer-review

16 Citations (Scopus)

Abstract

There are many hundreds of fault prediction models published in the literature. The predictive performance of these models is often reported using a variety of different measures, most of which are not directly comparable. This lack of comparability means that it is often difficult to evaluate the performance of one model against another. Our aim is to present an approach that allows other researchers and practitioners to transform many performance measures back into a confusion matrix. Once performance is expressed as a confusion matrix, alternative preferred performance measures can then be derived. Our approach has enabled us to compare the performance of 600 models published in 42 studies. We demonstrate the application of our approach on 8 case studies and discuss the advantages and implications of doing this.
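The core idea of back-computing a confusion matrix from commonly reported measures can be illustrated with a minimal sketch. The snippet below assumes a study reports recall, precision, the total number of modules, and the defect rate; the function name and inputs are illustrative and do not reproduce the paper's DConfusion notation.

```python
# Minimal sketch: reconstruct confusion matrix cells from reported measures.
# Assumes the study reports recall, precision, the total number of modules
# (n), and the proportion of defective modules (defect_rate). Names are
# illustrative, not the paper's DConfusion notation.

def confusion_from_measures(recall, precision, n, defect_rate):
    pos = n * defect_rate                   # actual defective modules
    tp = recall * pos                       # recall = TP / (TP + FN)
    fp = tp * (1 - precision) / precision   # precision = TP / (TP + FP)
    fn = pos - tp
    tn = n - tp - fp - fn
    return {"TP": round(tp), "FP": round(fp),
            "FN": round(fn), "TN": round(tn)}

# Example: a model reported with recall 0.70 and precision 0.50 on a
# data set of 1000 modules, 20% of which are defective.
print(confusion_from_measures(0.70, 0.50, 1000, 0.20))
# -> {'TP': 140, 'FP': 140, 'FN': 60, 'TN': 660}
```

From the reconstructed cells, any preferred measure (for example F-measure or Matthews correlation coefficient) can then be derived, putting models from different studies on a common footing.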
Original language: English
Pages (from-to): 287-313
Journal: Automated Software Engineering
Volume: 21
Issue number: 2
Early online date: 31 Jul 2013
Publication status: Published - Apr 2014

Keywords

  • fault, confusion matrix, machine learning
