Developing feedback literacy capabilities through an AI automated feedback tool

The use of Artificial Intelligence (AI) in teaching has mainly focused on providing students with more immediate and more frequent feedback while reducing teachers' workload. In this paper, we propose a shift in thinking about the role AI can play in developing students' feedback literacy. Through reflection on a University-wide teaching and learning pilot project run with an AI tool for automated feedback, we frame a discourse on the use of AI for teaching and learning to promote and further develop students' feedback literacy capabilities, including the dispositions required for a meaningful partnership with their teachers in relation to feedback. This paper reports the interactions of higher education students with an AI automated feedback tool and discusses its affordances in relation to the development of students' feedback literacy capabilities. Implications for further refinement of this tool are also presented.


Introduction
Feedback has the potential to have a great impact on students' achievement of learning outcomes; in reality, however, its impact is highly variable (Hattie, 2008). Furthermore, feedback is not just related to studies and lifelong learning; it is also a core capability within the workplace (Carless and Boud, 2018). To realise the potential of students becoming future ready for the world of work, students' feedback literacy needs to be developed. Students' feedback literacy is underpinned by an understanding of feedback, an appreciation of feedback, and the ability to take action on feedback (Carless and Boud, 2018). In fact, effective feedback relies more on students' actions and outputs in response to the feedback than on the quality and timing of the feedback. Carless (2020) argues that developing students' feedback literacy should become a core element of the curriculum, embedded systematically through enabling activities and metadialogues about feedback processes between teachers and students. This would be ideal; however, it would add to an already overcrowded curriculum and to teacher workload, and would depend heavily on teachers' own feedback literacy.
To counterbalance this, in recent years there has been increasing interest in the use of Artificial Intelligence (AI) for teaching and learning within the higher education sector (Khosravi et al., 2022; Markauskaite et al., 2022; Popenici & Kerr, 2017). In particular, the discourse on applying AI to provide automated feedback on written assignments has been on the rise, suggesting benefits for improving student writing skills (Conijn et al., 2020). AI-powered automated feedback tools that address academic writing could support the development of students' feedback literacy. This use of AI for automated feedback aligns with Molloy et al.'s (2020) suggestion that the development of students' feedback literacy cannot displace other activities but should be embedded as part of existing learning activities. It also addresses teachers' workload by automating low-level outcome feedback so that teachers can focus on providing feedback on higher order thinking skills. In addition, an AI feedback tool can readily position feedback as a student-centred activity rather than an attribute of teaching. It also removes the value judgement and the power structure from the feedback interaction, potentially encouraging students to be less emotional and more critical, and thus better placed to apply evaluative judgement (Tai et al., 2018) to the feedback in relation to their own work.
This paper describes the current AI powered automated feedback tool and presents Molloy et al.'s (2020) learning-centred framework for feedback literacy as the framework that will inform the tool's iterative design. Observations from the pilots undertaken at a large Australian University, which have elucidated the affordances of the current tool in relation to improving students' feedback literacy capabilities, are presented, and the implications for the next iteration of the tool are also discussed.

Background
As part of a University-wide strategic teaching and learning project, a large Australian university partnered with FeedbackFruits through a 'DoTank' innovation project. FeedbackFruits is an edutech company based in the Netherlands that develops educational tools that integrate with any LMS. The DoTank project involves codesign processes to further design and develop innovative educational technologies for teaching and learning. An AI powered automated feedback tool has been developed by FeedbackFruits through collaboration and piloting of the tool with other international universities. The aim of this tool is to provide students with timely and actionable feedback on academic writing and to reduce teachers' workload so they can focus on feedback related to higher order thinking skills. Our University embarked on piloting this tool in 2021.

Feedback Literacy
Feedback literacy refers to the understandings, capacities and dispositions required to benefit from feedback opportunities (Carless and Boud, 2018). In other words, for students to be feedback literate they need to understand feedback and how it works, they need to be able to plan and act in response to feedback received, and they need to value and be accepting of feedback processes. Furthermore, Molloy et al. (2020) elaborate on the specific understandings, capacities, and dispositions of a feedback literate student through the validation of a learning-centred framework for feedback literacy. This framework groups student feedback literacy characteristics into seven dimensions.
1. Commits to feedback as improvement.
2. Appreciates feedback as an active process.
3. Elicits information to improve learning.
4. Processes feedback information.
5. Acknowledges and works with emotions.
6. Acknowledges feedback as a reciprocal process.
7. Enacts outcomes of processing of feedback information: incorporates feedback into learning goals.
These seven dimensions of feedback literacy are critical in preparing graduates for the future world of work and can be addressed and developed through various learning activities designed by teachers.

AI tools for automated feedback
The term artificial intelligence (AI) is sometimes equated with artificial consciousness or sentient robots. However, these are still theoretical forms of AI. The AI in existence today, also called narrow AI, is focused on performing specific tasks. Amazon's Alexa and autonomous vehicles are examples of narrow AI. Narrow AI (or simply AI) processes data to identify and classify it based on its programming or training. The AI field encompasses multiple subfields such as machine learning, natural language processing, deep learning, and intelligent agents (Tang et al., 2020). Common to all of these is an outcome that is predictive, or in the form of a probability, as opposed to being deterministic. In other words, AI algorithms result in systems that make predictions or classifications based on input data. This is different from conventional programming, where programs are provided with input data and generate an output based on programming logic that performs calculations within fixed information boundaries.
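The contrast between deterministic programming logic and a predictive, probability-based outcome can be illustrated with a minimal sketch. The function names and word weights below are invented for illustration; they do not represent any actual AWE product.

```python
import math

def deterministic_check(sentence: str, max_words: int = 25) -> bool:
    """Conventional programming: the same input always yields the same
    rule-based output (here, a simple sentence-length check)."""
    return len(sentence.split()) <= max_words

# A toy "trained" model: word weights as if learned from labelled examples
# (the weights here are invented purely for illustration).
WEIGHTS = {"therefore": 0.9, "thus": 0.7, "stuff": -0.8, "things": -0.6}

def predict_academic_tone(sentence: str) -> float:
    """Narrow-AI style: the output is a probability, not a fixed rule.
    Scores words against learned weights, then squashes to (0, 1)."""
    score = sum(WEIGHTS.get(w.lower().strip(".,"), 0.0)
                for w in sentence.split())
    return 1 / (1 + math.exp(-score))  # logistic squashing

print(deterministic_check("Short sentence."))   # rule-based: True or False
print(predict_academic_tone("Therefore, thus.")) # probability between 0 and 1
```

The first function always produces the same yes/no answer for a given input; the second produces a confidence score that shifts as the "training" (the weights) changes, which is the essential difference described above.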
Among the AI tools that support writing are automated writing evaluation (AWE) tools. AWEs identify specific characteristics of a text, supporting the editing and drafting stages. AWEs can focus on the micro text level only (words, phrases, or sentences), the macro level only (paragraphs, whole text), or both (Strobl et al., 2019). Grammarly is a well-known example of an AWE (Khoshnevisan, 2019). AWEs that focus on the micro text level only are used at the editing stage of writing, while those that focus on the macro level can be used at the drafting stages.
In some AWEs the parameters are pre-set, as in Grammarly; in others, the parameters can be set by either teachers or students. The form of feedback returned also differs depending on the purpose of the tool and the features being detected. Grammarly, for example, being focused on copy-editing support, suggests changes that the user can accept individually or globally.

FeedbackFruits automated feedback on academic writing AI powered tool
FeedbackFruits have developed an AI tool that enables automated feedback on academic writing. This tool is a form of AWE. As mentioned, AWEs can focus on the micro text level only (words, phrases, or sentences), the macro level only (paragraphs, whole text), or both. The FeedbackFruits tool currently focuses on micro level features of texts only, supporting the copy-editing stage of writing. Micro level AWEs detect specific parameters in the text such as sentence length, punctuation rules, grammar, or text structure. In the current version of the tool, the teacher sets the parameters, but the student uses the tool independently. The tool highlights errors and either describes the mistake or directs the student towards additional information or learning resources; the student receives and actions the feedback without teacher involvement. The current aim of this tool is to provide timely and actionable feedback on specific aspects of students' writing skills and to reduce academic workload by automating low level feedback, allowing teaching staff to focus on providing feedback on more complex, higher order thinking aspects of the tasks. The provision of immediate feedback is also expected to increase students' use of specific feedback, stimulating active learning and resulting in higher quality written submissions.
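A micro level check of the kind described above can be sketched as follows. This is a hypothetical illustration only, not FeedbackFruits code: the parameter, message wording, and resource pointer are all invented, but the shape (teacher-set parameter, highlighted span, description of the mistake, pointer to a learning resource) mirrors the workflow just described.

```python
import re
from dataclasses import dataclass

@dataclass
class FeedbackItem:
    span: tuple      # (start, end) character offsets to highlight in the draft
    message: str     # description of the mistake
    resource: str    # pointer towards additional learning material

def check_sentence_length(text: str, max_words: int) -> list:
    """Flag sentences longer than the teacher-set word limit."""
    items = []
    for m in re.finditer(r"[^.!?]+[.!?]?", text):
        sentence = m.group().strip()
        if sentence and len(sentence.split()) > max_words:
            items.append(FeedbackItem(
                span=(m.start(), m.end()),
                message=f"Sentence exceeds {max_words} words; "
                        "consider splitting it.",
                resource="writing guide: sentence structure"))
    return items

# The teacher sets the parameter; the student submits a draft independently
# and actions the feedback without teacher involvement.
draft = ("This is fine. This sentence keeps going on and on and on "
         "well past the limit set.")
for item in check_sentence_length(draft, max_words=10):
    print(item.message, "->", item.resource)
```

A real micro level AWE would combine many such parameterised checks (punctuation, grammar, structure) and render the spans as in-text highlights rather than printed messages.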

Methodology
Design-based research approaches (Reeves, 2006; Akker, 1999) are commonly used in research on educational technology. These approaches involve successive approximation through the iterative integration of design principles with technological affordances, and the testing and refinement of the prototype or tool (Akker, 1999). In fact, Akker (1999) remarks that direct application of theory is not sufficient to solve such complicated problems. These approaches ideally include collaboration with practitioners in real contexts. In the pilot project described in this paper, the University's academics and students, across all Faculties, are involved in piloting and evaluating the prototypes.
This pilot project ran across three teaching periods in a total of 29 units, reaching almost 4,000 students. These units span the undergraduate and postgraduate levels, in the Faculties of Science, Engineering and Built Environment (SEBE), Arts and Education, and Business and Law. The pilot included 8 undergraduate units and 17 postgraduate units. However, undergraduate units had more students, with 70% of the total number of students with access to the tool being undergraduates. The writing tasks the tool has been piloted on are diverse and include theses, research reports, project reports, autoethnographies, essays and reflective tasks.

Findings
In undergraduate units, the tool was used on average by 13% of the students; in postgraduate units, by 12%. However, in postgraduate units there was large variation in the use of the tool across different units, ranging from no use to 34% of students using it. In both undergraduate and postgraduate units, the nature of the task did not appear to have an impact on the use of the tool; for example, long academic writing tasks did not result in more use of the tool than short reflective pieces.
The evaluation of the pilots with the 29 units, in consultation with the university teachers, together with the usage data of the AI tool, suggests the following six key observations:
• proactive, high-achieving students use the tool more
• the most common numbers of submissions are one, two or three (in that order)
• multiple submissions are commonly made in a single day
• students commented on the usefulness of the feedback received
• students highlighted errors or inaccurate feedback
• students rated the feedback, with the average rating being very positive
In what follows, we discuss our reflections and inferences in more detail.

Discussion
The pilot of the AI powered automated feedback tool provided students with the option of using the tool to seek immediate feedback on their academic writing. It was observed that a relatively small number of students (13%) used the tool, given the formative nature of the task. Those students who decided to use the tool were identified by their teachers as mostly proactive, high-achieving students. This aligns with the fact that engaging with this optional tool requires student agency and evaluative judgement skills: students need to be willing to elicit information to improve their own learning and be committed to feedback as improvement. A single submission was most commonly observed, followed by two or three submissions. Multiple submissions can be a proxy for students taking on the feedback provided, and they usually occurred within a window of 2 minutes to 5 hours. This indicates that some students not only commit to feedback as improvement but also appreciate feedback as an active and ongoing process. These observations evidence how this tool positions feedback as a student-centred activity rather than a teacher's task, and highlight the importance of student agency in this process. In fact, this indicates that the use of the tool, or lack thereof, is influenced less by students' capability to process feedback information than by their appreciation of feedback for improving their writing skills.
Students' interactions with the tool were possible and encouraged for each feedback instance through options for rating the feedback, commenting on its usefulness, and highlighting any errors or inaccurate feedback. All these options were used by students whereby students had the opportunity to provide their feedback on how useful the AI powered feedback was. This indicates that some students acknowledge feedback as a reciprocal process and that some of them feel capable of processing the feedback information and judging its quality or correctness through the application and calibration of their evaluative judgement. Importantly, this tool aims to remove the power structure from the feedback interaction, potentially promoting students to be less emotional and more critical, thus better placed to make evaluative judgements of the feedback in relation to their own work. These, combined with the affordances of this tool, make it ideal for using it to develop students' feedback literacy capabilities such as acknowledging feedback as a reciprocal process, processing the feedback information, and acknowledging and regulating their emo tions.

Conclusions, implications for future work
The pilots indicated that the AI tool affords students opportunities to develop and demonstrate the different dimensions of the learning-centred framework for feedback literacy (Molloy et al., 2020). However, not all students were able to act on these affordances. To scaffold students' feedback literacy, teachers need to incorporate the current tool into their learning design work in order to actualise these affordances. In other words, teachers need to make students aware of the different feedback literacy dimensions and include some guidance or strategies on how to enact them.
Distinct framings shape how learning and assessment activities are designed, what roles learners play, and what is valued in terms of improving learning outcomes (Kafai et al., 2020). By reframing the automated feedback tool towards developing feedback literacy through academic writing, the tool will have a greater impact not only on students' academic writing skills but also on their learning strategies in general. For the next iteration of the tool, the first proposed improvement is to make the tool student-facing. Currently, the teacher decides whether to add the tool to their units and which assessment tasks it will be associated with. The parameters against which a task is analysed by the tool are also set by the teacher, and the use of the tool for feedback is optional for students. We propose that the next iteration of the tool be made available in all units so that each student can decide which task they will use it for and the aspects of writing for which they would like to seek feedback. Making it widely available across all units, with no teacher involvement, will further cultivate a space for developing student agency and feedback literacy in self-regulating their learning. Furthermore, this solution does not add workload or rely on teachers' expertise to embed feedback literacy systematically. It would only require some strategic points across the degree where more sophisticated teaching and practice of feedback literacy would be integrated; for example, assessment tasks that scaffold feedback as an active process by asking students to use the tool at a specific draft stage and demonstrate the incorporation of feedback into their final submission. The other feature recommended for inclusion in the next prototype is templates for three different drafting stages, guiding students through understanding the process of writing at each stage.
This will raise awareness of the reciprocal nature of feedback and the role students need to play in it. It is expected that this will build an understanding of what an AI tool can and cannot do, creating a partnership between student and AI in relation to feedback, a disposition that can then be related and extended to their teachers.
This pilot study has therefore provided preliminary insights into how the AI powered tool for automated feedback can be incorporated into the ways students engage with feedback and develop their feedback literacy capabilities.