Software defect prediction using static code metrics underestimates defect-proneness

David Gray, David Bowes, N. Davey, Yi Sun, B. Christianson

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

18 Citations (Scopus)
108 Downloads (Pure)

Abstract

Many studies have been carried out to predict the presence of software code defects using static code metrics. Such studies typically report how a classifier performs on real-world data, but usually no analysis of the individual predictions is carried out. An analysis of this kind may be worthwhile, as it can illuminate the motivation behind the predictions and the severity of the misclassifications. This investigation involves a manual analysis of the predictions made by Support Vector Machine classifiers using data from the NASA Metrics Data Program repository. The findings show that the predictions are generally well motivated and that the classifiers were, on average, more 'confident' in the predictions that turned out to be correct.
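The approach the abstract describes can be illustrated with a short sketch: train an SVM on per-module static code metrics and use the distance from the separating hyperplane as a per-prediction 'confidence', then compare that confidence for correct versus incorrect predictions. This is not the authors' code; it is a minimal illustration assuming scikit-learn, and the file name and column names (a hypothetical NASA MDP-style CSV with a binary 'defective' label) are placeholders.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical NASA MDP-style dataset: one row per software module,
# static code metrics as features, a binary 'defective' label.
data = pd.read_csv("kc1_metrics.csv")
X = data.drop(columns=["defective"]).values
y = data["defective"].values

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Scale the metrics; SVMs are sensitive to feature magnitudes.
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf", C=1.0).fit(scaler.transform(X_train), y_train)

X_test_s = scaler.transform(X_test)
pred = clf.predict(X_test_s)

# Absolute distance from the separating hyperplane serves as a rough
# per-prediction 'confidence': a larger margin means the classifier is
# further from the decision boundary for that module.
margin = np.abs(clf.decision_function(X_test_s))

correct = pred == y_test
print("mean |margin|, correct predictions:  ", margin[correct].mean())
print("mean |margin|, incorrect predictions:", margin[~correct].mean())
```

Under the paper's finding, one would expect the mean margin for correct predictions to exceed that for incorrect ones.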
Original language: English
Title of host publication: IEEE International Joint Conference on Neural Networks (IJCNN)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 1-7
Number of pages: 7
ISBN (Print): 978-1-4244-6916-1
DOIs
Publication status: Published - 2010
