Authors' Reply to “Comments on 'Researcher Bias: The Use of Machine Learning in Software Defect Prediction'”

Martin Shepperd, Tracy Hall, David Bowes

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)
12 Downloads (Pure)

Abstract

In 2014 we published a meta-analysis of software defect prediction studies [1]. It suggested that the most important factor in determining results was Research Group, i.e., who conducts the experiment matters more than which classifier algorithm is investigated. A recent re-analysis [2] sought to argue that the effect is weaker than originally claimed because there is a relationship between Research Group and Dataset. In this response we show that (i) the re-analysis is based on a small (21%) subset of our original data; (ii) applying the same re-analysis approach to a larger subset still shows that Research Group is more important than type of Classifier; and (iii) however the data are analysed, there is compelling evidence that who conducts the research has an effect on the results. The problem of researcher bias therefore remains, and addressing it should be seen as a priority amongst those of us who conduct and publish experiments comparing the performance of competing software defect prediction systems.
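
The disagreement above is essentially one of variance partitioning: how much of the variation in reported prediction performance is explained by Research Group as opposed to Classifier? As a rough illustration only (not the authors' actual analysis), the sketch below fits a two-way ANOVA in Python using pandas and statsmodels on invented MCC scores; the group names, classifier names, and effect sizes are all hypothetical.

```python
# Minimal sketch: partitioning performance variance between Research Group
# and Classifier with a two-way ANOVA. All data here are invented; this
# illustrates the style of analysis, not the paper's actual computation.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
groups = [f"Group{g}" for g in range(1, 5)]       # hypothetical research groups
classifiers = ["NaiveBayes", "C4.5", "SVM"]       # hypothetical classifier families

rows = []
for g_idx, group in enumerate(groups):
    for clf in classifiers:
        for _ in range(3):                        # three results per (group, classifier)
            # Built-in group effect deliberately larger than any classifier effect.
            mcc = 0.30 + 0.08 * g_idx + rng.normal(0.0, 0.05)
            rows.append({"research_group": group, "classifier": clf, "mcc": mcc})
df = pd.DataFrame(rows)

# Two-way ANOVA (main effects only), then eta-squared per factor as a
# crude effect-size measure: factor sum of squares / total sum of squares.
model = ols("mcc ~ C(research_group) + C(classifier)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)
anova["eta_sq"] = anova["sum_sq"] / anova["sum_sq"].sum()
print(anova)
```

On data constructed this way, the eta_sq value for C(research_group) dominates, mirroring the qualitative claim that who runs the experiment explains more of the variance than which classifier is used.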
Original language: English
Pages (from-to): 1129-1131
Number of pages: 3
Journal: IEEE Transactions on Software Engineering
Volume: 44
Issue number: 11
Early online date: 24 Jul 2017
DOIs
Publication status: Published - Nov 2018

Keywords

  • Analysis of variance
  • Analytical models
  • Data models
  • defect prediction
  • Measurement
  • NASA
  • Predictive models
  • researcher bias
  • Software
  • Software quality assurance
