A curated benchmark collection of Python systems for empirical studies on software engineering

Matteo Orrú, Ewan Tempero, Michele Marchesi, Roberto Tonelli, Giuseppe Destefanis

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    9 Citations (Scopus)

    Abstract

    The aim of this paper is to present a dataset of metrics associated with the first release of a curated collection of Python software systems. We describe the dataset along with the adopted criteria and the issues we faced while building such a corpus. This dataset can enhance the reliability of empirical studies by enabling their reproducibility and reducing their cost, and it can foster further research on Python software.

    Original language: English
    Title of host publication: 11th International Conference on Predictive Models and Data Analytics in Software Engineering, PROMISE 2015
    Publisher: ACM Press
    Volume: 2015-October
    ISBN (Electronic): 9781450337151
    DOIs
    Publication status: Published - 21 Oct 2015
    Event: 11th International Conference on Predictive Models and Data Analytics in Software Engineering, PROMISE 2015 - Beijing, China
    Duration: 21 Oct 2015 → …

    Conference

    Conference: 11th International Conference on Predictive Models and Data Analytics in Software Engineering, PROMISE 2015
    Country/Territory: China
    City: Beijing
    Period: 21/10/15 → …

    Keywords

    • Curated Code Collection
    • Empirical Studies
    • Python
