Abstract
Background: The NASA Metrics Data Program data sets have been heavily used in software defect prediction experiments.
Aim: To demonstrate and explain why these data sets require significant pre-processing in order to be suitable for defect prediction.
Method: A meticulously documented data cleansing process involving all 13 of the original NASA data sets.
Results: After our novel data cleansing process, each data set contained between 6 and 90 percent fewer recorded values than it did originally.
Conclusions:
One: Researchers need to analyse the data that forms the basis of their findings in the context of how it will be used.
Two: Defect prediction data sets could benefit from lower-level code metrics in addition to those more commonly used, as these help to distinguish modules and reduce the likelihood of repeated data points.
Three: The bulk of defect prediction experiments based on the NASA Metrics Data Program data sets may have led to erroneous findings. This is mainly due to repeated data points potentially causing substantial amounts of training and testing data to be identical.
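The repeated-data-point issue in conclusion three can be made concrete with a small, self-contained sketch. This is not code from the paper: the metric values, the split, and the cleansing step below are invented purely to illustrate how exact repeats can end up in both the training and the test data, and how removing them shrinks a data set.

```python
# A minimal sketch (not from the paper) of why repeated data points matter:
# identical rows can land in both the training and the test split, so a
# classifier is partly evaluated on data it has already seen.
# All metric values below are invented for illustration only.
import random

# Each record: (loc, cyclomatic_complexity, operand_count, defective?)
records = [
    (10, 2, 14, False),
    (10, 2, 14, False),   # exact repeat of the row above
    (55, 9, 80, True),
    (55, 9, 80, True),    # exact repeat
    (23, 4, 30, False),
    (23, 4, 30, True),    # same metrics, conflicting label (inconsistent)
]

random.seed(1)
shuffled = random.sample(records, len(records))
train, test = shuffled[:3], shuffled[3:]

overlap = [row for row in test if row in train]
print(f"Test rows also present in training data: {len(overlap)} of {len(test)}")

# One possible cleansing step: keep a single copy of each repeated row.
deduplicated = list(dict.fromkeys(records))
print(f"{len(records)} rows before cleansing, {len(deduplicated)} after")
```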
| | |
|---|---|
| Original language | English |
| Title of host publication | Procs 15th Annual Conference on Evaluation & Assessment in Software Engineering (EASE 2011) |
| Publisher | Institution of Engineering and Technology (IET) |
| Pages | 96-103 |
| ISBN (Print) | 978-1-84919-509-6 |
| Publication status | Published - 2011 |
| Event | Proceedings of the 15th International Conference on Evaluation and Assessment in Software Engineering, Durham, United Kingdom, 11 Apr 2011 → 12 Apr 2011 |
Conference

| | |
|---|---|
| Conference | Proceedings of the 15th International Conference on Evaluation and Assessment in Software Engineering |
| Country/Territory | United Kingdom |
| City | Durham |
| Period | 11 Apr 2011 → 12 Apr 2011 |