“Summarise.” “Elaborate.” “Try Again”

Exploring the effect of feedback literacy on AI-enhanced essay writing

Authors

  • Brittany Hawkins, University of Queensland
  • Jason Lodge, University of Queensland
  • Daniel Taylor-Griffiths, University of Queensland
  • David Carless, University of Hong Kong

Keywords:

Generative AI, feedback literacy, self-regulated learning, evaluative judgement

Abstract

Generative artificial intelligence (AI) is transforming the way students learn and complete assessment. Conservative estimates suggest that more than 50% of university students are using AI in their studies (Higher Education Policy Institute, 2024). In particular, students have reported the benefits of using AI for real-time, personalised feedback (Chan & Hu, 2023).

Generative AI tools such as ChatGPT are large language models, and as such their output should not be mistaken for knowledge of any given topic. As students complete more of their studies off campus and without direct supervision (Lodge et al., 2023), feedback literacy, the ability to seek out, evaluate, and apply feedback to a task or process (Carless & Boud, 2018), is critical. This study employed a self-regulated learning (SRL) framework (Pintrich, 2000) to investigate how students use AI for feedback.

In individual sessions, psychology students completed a screen-recorded, 25-minute essay task, using AI to enhance their work. After a questionnaire capturing AI experience and trust, perceived task difficulty, and feedback literacy behaviours, participants discussed how they had used AI to complete the task while watching the screen recording of their essay. Essays were graded blind, and interview recordings were transcribed. While the study was predominantly exploratory, we also expected better essay performance to be associated with greater feedback literacy.

A multiple regression found feedback literacy to be a significant predictor of essay performance (β = .46, t(25) = 2.56, p = .017). A thematic analysis (Braun & Clarke, 2006) of the interview transcripts identified four themes (and 10 subthemes) of AI use: feed forward (initial requests to AI), feedback (asking AI to assess one’s own work), feedback evaluation (evaluating AI output), and AI avoidance (deliberately not using AI). Fewer than 20% of participants explicitly asked AI for essay feedback; most feedback requests were instead for “line level” language improvements. Upon receiving feedback from AI, all but one participant evaluated the accuracy or usefulness of AI content at least once. Requests to “expand,” “summarise,” “elaborate,” and “try again” directly enacted the user’s evaluation upon the AI output. Interestingly, half of the participants also described actively avoiding AI, many citing concerns that they “could just accidentally, subconsciously, just write it [the essay] the same” as AI.
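For readers wanting to reproduce this style of analysis, the following is a minimal sketch in Python (using statsmodels) of how a standardised regression coefficient of this kind can be estimated. The data, variable names, and single-predictor setup are illustrative assumptions only; they are not the study’s materials.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(42)

    # Fabricated illustrative data: a feedback literacy score and an
    # essay mark for 27 students (residual df = 27 - 2 = 25).
    data = pd.DataFrame({"feedback_literacy": rng.normal(3.5, 0.6, 27)})
    data["essay_score"] = 40 + 8 * data["feedback_literacy"] + rng.normal(0, 6, 27)

    # Standardise both variables so the fitted slope is a standardised beta,
    # directly comparable to the beta reported in the abstract.
    z = (data - data.mean()) / data.std(ddof=1)

    X = sm.add_constant(z[["feedback_literacy"]])
    fit = sm.OLS(z["essay_score"], X).fit()

    print(fit.params["feedback_literacy"])   # standardised beta
    print(fit.tvalues["feedback_literacy"])  # t statistic
    print(fit.pvalues["feedback_literacy"])  # p value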

These findings are consistent with existing research demonstrating the positive effect of feedback on academic outcomes (Wisniewski et al., 2020) and with the conceptualisation of feedback literacy as a sophisticated toolset for feedback evaluation (Carless & Boud, 2018). Generative AI created a context of co-regulation between student and machine: participants used it to outsource cognitively intense activities, to motivate task completion by corroborating their understanding, and to enable and encourage help-seeking.

The results of this study highlight the need for educational institutions to foster feedback literacy skills that encourage thoughtful, carefully considered use of generative AI tools. Without SRL skills grounded in self-efficacy and a motivation to learn, the AI operated more like the student than like the student’s tool.

Published

2024-11-23

Section

ASCILITE Conference - Posters