Description

Evaluation and Assessment in Software Engineering is the name of our conference. We discuss and advance evaluation and assessment methods, such as experiments, case studies, data mining studies, and systematic literature reviews. But what are the study objects? What do we evaluate and assess?
Empirical software engineering evolved partly as a reaction against "novelty" being the primary quality criterion for research. By focusing on evaluation and assessment, we tried to take a more comprehensive perspective on what advances software engineering. However, the pendulum has swung towards the other end, and the design of solutions has been overshadowed by the evaluation part. The design science paradigm may bring the balance back.
To advance software engineering, we as a community try to understand current practices and their strengths and weaknesses. We also propose new tools, practices, etc. to make software engineering more efficient, predictable, or effective. Implicitly or explicitly, we apply a cycle of problem understanding, solution design, and evaluation. This cycle is at the core of the design science paradigm. Further, expressing design knowledge as actionable guidelines in the form of technological rules aims both at communicating research outcomes to practitioners and at providing a theoretical basis for subsequent research.
In this talk, I define my conceptualization of design science, present examples of software engineering research through the lens of design science, and outline how we as empirical software engineering researchers and practitioners may advance the field within the design science paradigm.
|Period||2019 Apr 17|
|Event title||23rd Evaluation and Assessment in Software Engineering Conference, EASE 2019|
|Sponsor||IT University of Copenhagen|
|Degree of Recognition||International|
UKÄ subject classification
- Software Engineering