Supporting Peer Feedback at Scale
A Process Map
DOI: https://doi.org/10.14742/apubs.2024.1140
Keywords: peer review, self-assessment, curriculum retention, engagement, community, student agency
Abstract
Schneider et al. (2017) conducted a systematic review of 38 meta-analyses investigating 105 variables linked to student performance and reported that peer assessment is the most influential factor in explaining students’ achievement. The benefits are even greater when peer assessment is combined with self-assessment, because students exercise their meta-cognitive capacity as they calibrate their own level of understanding and confidence against that of their peers (Power & Tanner, 2023). Assessing a peer digitally has also been shown to be more effective than doing so in person (Li et al., 2020). Winstone and Boud (2022) recommend improving students’ learning by focusing on providing and receiving high-quality feedback rather than on grades.
Despite the potential of technology to help students assess each other, research in commerce studies is sparse, let alone in large-cohort subjects, where technology is essential to leverage students’ learning and encourage interaction that would otherwise be limited. A study in this setting can shed light on the learning and engagement benefits for students. Technology can also help create a structured learning environment in which students complete multiple tasks throughout the semester and engage with the platform repeatedly, improving curriculum retention (Schwerter et al., 2022).
Our project involved implementing five tasks (a practice task and four assessed tasks) in a large-cohort first-year finance subject at a leading Australian university. The tasks contributed 15% to each student’s final mark and used the Feedback Fruits platform (integrated with the Canvas LMS). For each task, students scanned their handwritten answers to two questions, uploaded them to the platform as a PDF file, completed a self-assessment, and reviewed a randomly assigned anonymous peer using a rubric designed to mimic the rubric used to mark final exam questions. We configured the system so that students had at least one week to upload their answers and another week to complete the self-assessment and peer review.
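To make the timeline concrete, the following is a minimal sketch of how such a task schedule could be represented programmatically. The PeerReviewTask dataclass, the start date, and the equal split of the 15% weighting across the four assessed tasks are illustrative assumptions, not the actual Feedback Fruits or Canvas configuration.

```python
# Hypothetical representation of the five-task schedule: each task has an
# upload window (at least one week) followed by a self-assessment / peer-review
# window (another week). Dates and weights are illustrative assumptions.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class PeerReviewTask:
    name: str
    weight_pct: float          # contribution to the final mark (0 for the practice task)
    upload_opens: date
    upload_closes: date        # end of the upload window
    review_closes: date        # end of the self-assessment / peer-review window

def build_schedule(start: date) -> list[PeerReviewTask]:
    """One practice task plus four assessed tasks worth 15% in total (assumed equal split)."""
    names = ["Practice", "Task 1", "Task 2", "Task 3", "Task 4"]
    weights = [0.0, 3.75, 3.75, 3.75, 3.75]
    tasks = []
    for i, (name, weight) in enumerate(zip(names, weights)):
        opens = start + timedelta(weeks=2 * i)
        tasks.append(PeerReviewTask(
            name=name,
            weight_pct=weight,
            upload_opens=opens,
            upload_closes=opens + timedelta(weeks=1),
            review_closes=opens + timedelta(weeks=2),
        ))
    return tasks

if __name__ == "__main__":
    for task in build_schedule(date(2024, 3, 4)):  # hypothetical semester start
        print(task)
```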
Statistical analysis of the Feedback Fruits analytics files, which include completion rates and performance, used regression analysis and propensity score matching. It indicates a significant positive relationship between platform engagement (measured by time spent on the platform, the length of comments provided, and task performance) and final exam performance, even after controlling for a student’s finance aptitude. The study also involved an end-of-semester survey and focus groups to capture students’ perceptions. Students reported positive learning benefits from reviewing their peers, being reviewed, and reflecting on their own performance. Furthermore, we found a high correlation between the number and average length of the comments students left for themselves and those they left for their peers. The innovation of our research lies in its scale, the continuous use of the platform throughout the semester with multiple access points per task, and better exam preparedness achieved by simulating exam conditions through rubrics and handwriting requirements.
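To illustrate the kind of analysis described, a minimal Python sketch is given below, assuming a hypothetical export of the platform analytics. The column names, the median-split definition of an "engaged" student, and the single-covariate propensity model are illustrative assumptions, not the study’s actual pipeline.

```python
# Sketch of the engagement-vs-exam-performance analysis: OLS regression with an
# aptitude control, plus a simple propensity score matching step.
# File name, column names, and the engagement threshold are assumptions.
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("feedback_analytics.csv")  # hypothetical analytics export

# 1. Regression: final exam score on engagement measures, controlling for aptitude.
ols = smf.ols(
    "final_exam ~ time_on_platform + comment_length + task_performance + aptitude",
    data=df,
).fit()
print(ols.summary())

# 2. Propensity score matching: estimate the probability of being "engaged"
#    (here, an illustrative median split on platform time) from aptitude,
#    then match each engaged student to the nearest non-engaged student.
df["engaged"] = (df["time_on_platform"] > df["time_on_platform"].median()).astype(int)
ps_model = LogisticRegression().fit(df[["aptitude"]], df["engaged"])
df["pscore"] = ps_model.predict_proba(df[["aptitude"]])[:, 1]

treated = df[df["engaged"] == 1]
control = df[df["engaged"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# Mean difference in final exam score across matched pairs.
att = (treated["final_exam"].values - matched_control["final_exam"].values).mean()
print(f"Matched difference in final exam score: {att:.2f}")
```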
This poster describes the project’s implementation process map, including task setup, timelines, and the support provided, which constitutes the technical part of the broader project. We review key mechanisms for improving participation and the quality of the reviews. In addition, the poster presents the challenges we faced and offers practical steps to overcome them.
License
Copyright (c) 2024 Sean Pinder, Miriam Edwards, Assaf Dekel
This work is licensed under a Creative Commons Attribution 4.0 International License.