AI-powered peer review process

An approach to enhance computer science students’ engagement with code review in industry-based subjects

Authors

  • Eduardo Oliveira, University of Melbourne
  • Shannon Rios, University of Melbourne
  • Zhuoxuan Jiang, University of Melbourne

DOI:

https://doi.org/10.14742/apubs.2023.482

Keywords:

peer review, code review, automated review, feedback, self-efficacy

Abstract

Code review is a common type of peer review in Computer Science (CS) education. It is a peer review process in which CS students other than the original author examine source code, and it is widely acknowledged as an effective method for reducing software errors and enhancing the overall quality of software projects. While code review is an essential skill for CS students, they often feel uncomfortable sharing their work or providing feedback to peers due to concerns related to coding experience, validity, reliability, bias, and fairness. An automated code review process could offer students timely, consistent, and independent feedback on their coding artifacts. We investigated the use of generative Artificial Intelligence (genAI) to automate a peer review process and enhance CS students’ engagement with code review in an industry-based subject in the School of Computing and Information Systems, University of Melbourne. We also evaluated the effectiveness of genAI at performing checklist-based assessments of code. A total of 80 CS students performed over 36 reviews across two different weeks. We found that our genAI-powered reviewing process significantly increased students’ engagement in code review and could identify a larger number of code issues in a shorter time, leading to more fixes. These results suggest that our approach could be successfully used in code reviews, potentially helping to address issues related to peer review in higher education settings.
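The abstract does not describe the implementation, but a checklist-based genAI code review can be sketched roughly as follows. This is a minimal illustration assuming the OpenAI chat completions API; the checklist items, model name, and the review_code function are hypothetical placeholders, not the authors’ actual pipeline.

# Minimal sketch of a checklist-based genAI code review.
# Assumes the OpenAI chat completions API; checklist items and model
# name are illustrative assumptions, not the study's configuration.
from openai import OpenAI

CHECKLIST = [
    "Are variable and function names descriptive and consistent?",
    "Is error handling present for likely failure cases?",
    "Is there duplicated logic that could be refactored?",
    "Are comments or docstrings provided where intent is non-obvious?",
]

def review_code(source: str, model: str = "gpt-4") -> str:
    """Ask the model to assess `source` against each checklist item."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    checklist_text = "\n".join(f"- {item}" for item in CHECKLIST)
    prompt = (
        "You are reviewing a student's code submission.\n"
        "For each checklist item, answer yes or no, cite the relevant "
        "lines, and suggest concrete, actionable fixes.\n\n"
        f"Checklist:\n{checklist_text}\n\nCode:\n{source}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("submission.py") as f:
        print(review_code(f.read()))

Structuring the prompt around an explicit checklist, as above, is one plausible way to obtain the timely, consistent, and independent feedback the abstract describes, since every submission is assessed against the same criteria.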

Published

2023-11-28