Abstract
Background: There has been much discussion amongst automated software defect prediction researchers regarding the use of precision and false positive rate as classifier performance metrics.
Aim: To demonstrate and explain why failing to report precision when using data with highly imbalanced class distributions may provide an overly optimistic view of classifier performance.
Method: Well documented examples of how class distribution affects the suitability of these performance measures.
Conclusions: When using data where the minority class represents less than around 5 to 10 percent of the data points, failing to report precision may be a critical mistake. Furthermore, deriving the precision values omitted from studies can reveal valuable insight into true classifier performance.
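As a minimal sketch of the derivation the conclusions refer to (not code from the paper itself), precision can be recomputed from two commonly reported metrics, recall (pd) and false positive rate (pf), together with the class distribution. The function name and example figures below are illustrative assumptions, not values taken from the study.

```python
def derived_precision(recall, false_positive_rate, positive_rate):
    """Precision implied by recall, false positive rate, and the
    proportion of defective (positive) modules in the data set."""
    tp = recall * positive_rate                      # expected true positives per data point
    fp = false_positive_rate * (1 - positive_rate)   # expected false positives per data point
    return tp / (tp + fp) if (tp + fp) > 0 else 0.0

# With a 5% defect rate, a classifier reporting recall = 0.70 and a
# seemingly low false positive rate of 0.10 has precision of only ~0.27.
print(derived_precision(0.70, 0.10, 0.05))  # ≈ 0.269
```

This illustrates why a low false positive rate can look impressive on highly imbalanced data while the implied precision remains poor.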
| Original language | English |
| --- | --- |
| Pages (from-to) | 129-133 |
| Journal | IET Seminar Digest |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 2011 |
| Event | Proceedings of the 15th International Conference on Evaluation and Assessment in Software Engineering, Durham, United Kingdom, 11 Apr 2011 → 12 Apr 2011 |