Comparing student learning experiences of in-text commentary and rubric-articulated feedback: strategies for formative assessment
Research output: Contribution to journal › Article
This study compares students' experiences of two types of criteria-based assessment — in-text commentary and rubric-articulated feedback — in an assessment design combining the two feedback channels. The main aim is to use students' responses to shed light on how feedback strategies for formative assessment can be optimised. Following action research methodology, the study discusses key categories of student responses from three sources: reflective texts, a questionnaire, and interviews. Results show that different functions were attributed to the two feedback channels: in-text commentary to lower-order concerns related to language proficiency, and rubric-articulated feedback to higher-order concerns related to an overview of writing achievement. We argue that these different functions have the potential to create a sufficiently balanced assessment design capable of serving both short-term and continuous learning goals. On the other hand, some students found it difficult to navigate between the two feedback channels. The article therefore ends with a 'lessons learned' section where we list possible ways in which the current assessment design can be improved for optimal use of the synergy effects emanating from a combination of in-text commentary and rubric-articulated feedback for formative purposes.
Journal: Assessment & Evaluation in Higher Education
Publication status: Published - 2013