Abstract
Many studies have used static code metrics to predict the presence of defects in software code. Such studies typically report how well a classifier performs on real-world data, but usually offer no analysis of the predictions themselves. An analysis of this kind may be worthwhile, as it can illuminate the motivation behind the predictions and the severity of the misclassifications. This investigation involves a manual analysis of the predictions made by Support Vector Machine classifiers trained on data from the NASA Metrics Data Program repository. The findings show that the predictions are generally well motivated and that the classifiers were, on average, more 'confident' in the predictions that were correct.
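The abstract does not specify how 'confidence' was measured; for a binary SVM a natural proxy is a module's unsigned distance from the separating hyperplane. The following sketch illustrates that idea only, under stated assumptions: it is not the authors' pipeline, and synthetic data from scikit-learn's `make_classification` stands in for the NASA MDP metrics, which are not reproduced here.

```python
# A minimal sketch, not the paper's actual method: synthetic features stand
# in for NASA MDP static code metrics, and the unsigned distance from the
# SVM decision boundary (decision_function) serves as a proxy for the
# classifier's 'confidence' in each prediction.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a defect data set: 21 metric-like features, with
# defective modules (class 1) in the minority, as in the NASA repositories.
X, y = make_classification(n_samples=1000, n_features=21,
                           weights=[0.85, 0.15], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# RBF-kernel SVM; features are standardised first, as is usual for SVMs.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)

pred = model.predict(X_test)
margin = np.abs(model.decision_function(X_test))  # distance from hyperplane
correct = pred == y_test

# If the abstract's finding held here, the first mean would exceed the second.
print(f"mean margin (correct predictions):   {margin[correct].mean():.3f}")
print(f"mean margin (incorrect predictions): {margin[~correct].mean():.3f}")
```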
| Original language | English |
| --- | --- |
| Title of host publication | IEEE International Joint Conference on Neural Networks (IJCNN) |
| Publisher | Institute of Electrical and Electronics Engineers (IEEE) |
| Pages | 1-7 |
| Number of pages | 7 |
| ISBN (Print) | 978-1-4244-6916-1 |
| Publication status | Published - 2010 |