Abstract
Evaluation methods are vital tools for supporting learning and ensuring quality in crisis response management (CRM): they provide systematic feedback in situations where it is difficult to understand how individual factors affect the outcome of an event. In this paper, the scoping study methodology is used to examine the range and nature of methods available for evaluating CRM performance. The methods are systematically charted into classes to highlight significant research gaps and examine current trends. The analysis reveals that evaluation methods are not available for all relevant circumstances, that no attempts have been made to establish empirical baselines, and that the link between a method and effectiveness is rarely addressed. The paper also highlights how doctrinal knowledge may be used to increase interrater reliability, and discusses issues that complicate statistical validation in this field.
| Original language | English |
|---|---|
| Pages (from-to) | 107-143 |
| Number of pages | 37 |
| Journal | International Journal of Emergency Management |
| Volume | 18 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 2023 |
Subject classification (UKÄ)
- Information Systems
- Information Systems, Social aspects (including Human Aspects of ICT)
Free keywords
- Analysis
- C2
- Command and control
- Crisis management
- Crisis response management
- CRM
- Evaluation methods
- Human performance
- Literature review
- Performance evaluations
- Quantitative evaluations
- Scoping study