TY - THES
T1 - Improving disaster response evaluations
T2 - Supporting advances in disaster risk management through the enhancement of response evaluation usefulness
AU - Beerens, Ralf Josef Johanna
N1 - Defence details
Date: 2021-09-03
Time: 10:15
Place: Lecture hall V:B, building V, John Ericssons väg 1, Faculty of Engineering LTH, Lund University, Lund. Zoom: https://lu-se.zoom.us/j/65999762322?pwd=dnJ0Q1pOdlVWdVk2MndEZjg1akpyUT09
External reviewer(s)
Name: Kruke, Björn Ivar
Title: Assoc. Prof.
Affiliation: University of Stavanger, Norway.
PY - 2021
Y1 - 2021
AB - Future disasters or crises are difficult to predict and therefore hard to prepare for. However, while a specific event might not have happened, it can be simulated in an exercise. The evaluation of performance during such an exercise can provide important information regarding the current state of preparedness, and be used to improve the response to future events. For this to happen, evaluation products must be perceived as useful by the end user. Unfortunately, it appears that this is not the case. Both evaluations and their products are rarely used to their full extent or, in extreme cases, are regarded as paper-pushing exercises. The first part of this research characterises current evaluation practice, both in the scientific literature and in Dutch practice, based on a scoping study, document and content analyses, and expert judgements. The findings highlight that despite a recent increase in research attention, few studies focus on disaster management exercise evaluation. It is unclear whether current evaluations achieve their purpose, or how they contribute to disaster preparedness. Both theory and practice tend to view and present evaluations in isolation. This limited focus creates a fragmented field that lacks coherence and depth. Furthermore, most evaluation documentation fails to justify or discuss the rationale underlying the selected methods, and their link to the overall purpose or context of the exercise. The process of collecting and analysing contextual, evidence-based data, and using it to reach conclusions and make recommendations, lacks methodological transparency and rigour. Consequently, professionals lack reliable guidance when designing evaluations. The second part of this research therefore aimed to gain insight into what makes evaluations useful, and to suggest improvements. In particular, it highlights the value associated with the methodology used to record and present evaluation outcomes to end users. The notion of an ‘evaluation description’ is introduced to support the identification of four components that are assumed to influence the usefulness of an evaluation: its purpose, object description, analysis and conclusion. Survey experiments identified that how these elements – notably, the analysis and/or conclusions – are documented significantly influences the usefulness of the product. Furthermore, different components are more useful depending on the purpose of the report (for learning or accountability). Crisis management professionals expect the analysis to go beyond the object of the evaluation and focus on the broader context. They expect a rigorous evaluation to provide them with evidence-based judgements that deliver actionable conclusions and support future learning. Overall, this research shows that the design and execution of evaluations should provide systematic, rigorous, evidence-based and actionable outcomes. It suggests some ways to manage both the process and the products of an evaluation to improve its usefulness. Finally, it underlines that it is not the evaluation itself that leads to improvement, but its use. Evaluation should, therefore, be seen as a means to an end.
KW - crisis
KW - disaster
KW - emergency
KW - disaster risk management (DRM)
KW - preparedness
KW - exercise
KW - simulation
KW - response
KW - performance
KW - evaluation
KW - usefulness
KW - design
KW - The Netherlands
M3 - Doctoral Thesis (compilation)
SN - 978-91-7895-923-5
PB - Division of Risk Management and Societal Safety, Faculty of Engineering, Lund University
CY - Lund
ER -