Transformative Assessment Review

Integrating AI Competence into Higher Education Frameworks

Authors

  • Kate Tran, Polytechnic Institute Australia
  • Nassima Kennedy, Polytechnic Institute Australia
  • Zohreh Moghaddas, Polytechnic Institute Australia

DOI:

https://doi.org/10.14742/apubs.2024.1323

Keywords:

Generative AI, AI Competence, Assessment Review, Quality Assessment Framework, AI Literacy, Ethical AI Use, Assessment Integrity

Abstract

The rapid advancement of generative artificial intelligence (AI) in education and the workforce necessitates that students acquire critical competencies, including digital literacy, data integrity, and ethical AI use, to navigate an increasingly AI-driven world. Equally, academic staff must be upskilled to guide students effectively in the ethical and practical applications of AI. This project examines how a private higher education (HE) institute has modified its Quality Assessment Framework (QAF) to incorporate an AI competence dimension, improve HE assessment, and foster students’ competencies. The new dimension complements the existing QAF elements (Intellectual Quality, Significance, and Student Support) to ensure a holistic approach to preparing students for AI-integrated academic and professional environments.

This project’s framework and innovative tools, such as the enhanced QAF and an AI-driven Generative Pre-trained Transformer (GPT) assessment review tool, support educators in adapting curriculum and assessment design and empower them to integrate AI competencies seamlessly into their courses. The development of these tools drew upon established frameworks, particularly the QAF (Gore et al., 2009) and the AI Assessment Scale (AIAS) of Perkins et al. (2024), which categorises AI usage across five levels, guiding assessment reviews and ensuring consistent standards for AI integration in student work.
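To make the five-level structure concrete, the sketch below represents the AIAS levels as a small Python data structure. The level names follow Perkins et al. (2024); the enum and the describe helper are illustrative assumptions, not part of the published tool.

```python
from enum import IntEnum

class AIASLevel(IntEnum):
    """The five levels of the AI Assessment Scale (Perkins et al., 2024)."""
    NO_AI = 1                 # No AI use permitted in the assessment
    IDEA_GENERATION = 2       # AI may assist with brainstorming and structuring
    AI_ASSISTED_EDITING = 3   # AI may refine human-written drafts
    AI_TASK_COMPLETION = 4    # AI completes tasks; students evaluate the outputs
    FULL_AI = 5               # AI may be used throughout the task

def describe(level: AIASLevel) -> str:
    """Return a short reviewer-facing label for an AIAS level (hypothetical helper)."""
    labels = {
        AIASLevel.NO_AI: "No AI",
        AIASLevel.IDEA_GENERATION: "AI-assisted idea generation and structuring",
        AIASLevel.AI_ASSISTED_EDITING: "AI-assisted editing",
        AIASLevel.AI_TASK_COMPLETION: "AI task completion with human evaluation",
        AIASLevel.FULL_AI: "Full AI",
    }
    return f"Level {level.value}: {labels[level]}"

print(describe(AIASLevel.AI_ASSISTED_EDITING))  # Level 3: AI-assisted editing
```

Declaring each assessment at one of these levels gives reviewers a shared vocabulary for judging whether student AI use matches what the task permits.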

A trial, conducted across selected disciplines at the institute, involved academic staff as assessment reviewers and used the GPT tool to streamline the assessment review process by aiding feedback provision, coding, and suggestions for improvement. Evaluation of the tool combined qualitative and quantitative methods: academic feedback was gathered on usability, clarity, and effectiveness, and comparative studies assessed review time and quality before and after GPT integration. Usability testing evaluated workflow compatibility, while academic performance data provided insights into the tool’s impact on student outcomes. Initial results on the tool’s success in enhancing assessment alignment with institutional goals and in fostering a comprehensive understanding of AI competence were also examined.
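The abstract does not detail how the GPT review tool is implemented. As a minimal sketch, assuming an OpenAI-style chat completion API, a reviewer-facing call might look like the following; the prompt wording, model name, and review_assessment helper are all hypothetical.

```python
from openai import OpenAI  # assumes the official openai Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical review prompt: the QAF dimensions and declared AIAS level are
# supplied as context so the feedback stays aligned with institutional criteria.
SYSTEM_PROMPT = (
    "You are an assessment reviewer. Evaluate the assessment brief against the "
    "Quality Assessment Framework dimensions (Intellectual Quality, Significance, "
    "Student Support, AI Competence) and the declared AI Assessment Scale level. "
    "Return structured feedback and concrete suggestions for improvement."
)

def review_assessment(brief: str, aias_level: int) -> str:
    """Send an assessment brief for review; returns the model's feedback text."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"AIAS level: {aias_level}\n\n{brief}"},
        ],
    )
    return response.choices[0].message.content

feedback = review_assessment("Essay: critically evaluate an AI tool ...", aias_level=3)
print(feedback)
```

Keeping the institutional criteria in the system prompt, rather than in each reviewer's message, is one way to enforce the consistent review standards the project describes.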

Developing and implementing these tools has presented challenges, including the complexity of aligning them with diverse curriculum needs across disciplines, the time investment required for staff training, and the necessity of continuous updates to keep pace with AI technology. Addressing these challenges has been crucial to ensuring the effectiveness and sustainability of these solutions.

This digital poster provides insights into the project, covering its development process, trial outcomes, challenges encountered, and future directions for integrating AI competence in higher education assessments.

Published

2024-11-11

Section

ASCILITE Conference - Posters