Abstract
Bug benchmarks are used in the development and evaluation of debugging approaches, e.g., fault localization and automated repair. A quantitative performance comparison of different debugging approaches is only possible when they have been evaluated on the same dataset or benchmark. However, benchmarks are often specialized towards certain debugging approaches in their contained data, metrics, and artifacts. Such benchmarks cannot easily be used with debugging approaches outside their scope, as those approaches may rely on specific data, such as bug reports or code metrics, that are not included in the dataset. Furthermore, benchmarks vary in size w.r.t. both the number of subject programs and the size of the individual subject programs. For these reasons, we have performed a systematic literature review in which we identified 73 benchmarks that can be used to evaluate debugging approaches. We compare the benchmarks w.r.t. their size and the information they provide, such as bug reports, contained test cases, and other code metrics. This comparison is intended to help researchers quickly identify all benchmarks suitable for evaluating their specific debugging approaches. Furthermore, we discuss recurring issues and challenges in the selection, acquisition, and usage of such bug benchmarks, i.e., data availability, data quality, duplicated content, data formats, reproducibility, and extensibility. Editor's note: Open Science material was validated by the Journal of Systems and Software Open Science Board.
| Original language | English |
| --- | --- |
| Article number | 111423 |
| Number of pages | 17 |
| Journal | Journal of Systems and Software |
| Volume | 192 |
| DOIs | |
| Publication status | Published - Oct 2022 |
Keywords
- Debugging
- Benchmark
- Fault localization
- Automated repair
- Automatic repair
ASJC Scopus subject areas
- Software
- Information Systems
- Hardware and Architecture
Fields of Expertise
- Information, Communication & Computing
Treatment code (closer classification)
- Basic - Fundamental (basic research)
Projects
- 1 Finished
FWF - AMADEUS - Automated Debugging in Use
Hofer, B. G. (Co-Investigator (CoI))
1/01/20 → 30/04/24
Project: Research project