Abstract
In software engineering research, experiments are conducted to evaluate new methods or techniques. Experimentation as such is beginning to mature, but little effort is spent on learning across studies, apart from a few meta-analyses. Meta-analysis can be applied to a set of experiments that share the same design. This paper discusses learning across a set of experimental studies on fault detection techniques, conducted in very similar environments, although with different hypotheses. Four experiments applying Usage-Based Reading (UBR) have been conducted, thereby establishing a point of reference for other techniques. Across the experiments, UBR is compared to Checklist-Based Reading (CBR), to two variants of UBR, and to Usage-Based Testing (UBT). We present an approach to analysis across different experimental studies and identify a set of issues for discussing whether the approach is feasible for further use in empirical software engineering.
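To make the pooling idea concrete, below is a minimal sketch of a standard fixed-effect, inverse-variance meta-analysis, the kind of combination that becomes possible when experiments share a design. The effect sizes and standard errors are hypothetical placeholders, not results from the four experiments, and the paper's own analysis approach may differ.

```python
import math

# (effect_size, standard_error) for each experiment -- hypothetical numbers,
# not results from the four UBR experiments discussed in the paper.
studies = [
    (0.45, 0.20),  # e.g., UBR vs. CBR
    (0.30, 0.25),  # e.g., UBR vs. a UBR variant
    (0.55, 0.30),  # e.g., UBR vs. UBT
]

# Fixed-effect model: weight each study by the inverse of its variance.
weights = [1.0 / (se ** 2) for _, se in studies]

# The pooled effect is the weighted mean of the individual effect sizes.
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)

# Standard error of the pooled effect under the fixed-effect model.
pooled_se = math.sqrt(1.0 / sum(weights))

# Approximate 95% confidence interval for the pooled effect.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.2f}, 95% CI = [{low:.2f}, {high:.2f}]")
```

Note that this simple weighting assumes the studies estimate a single common effect; experiments run with different hypotheses, as here, would typically require a random-effects model or a more qualitative cross-study comparison.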
| Original language | English |
| --- | --- |
| Title of host publication | 2nd Workshop in Workshop Series on Empirical Software Engineering |
| Pages | 133-142 |
| Publication status | Published - 2003 |
Subject classification (UKÄ)
- Computer Science