Findings from a survey looking at attitudes towards AI and its use in teaching, learning and research
DOI: https://doi.org/10.14742/apubs.2023.537
Keywords: Artificial Intelligence, ChatGPT, Generative AI, Student attitudes, Staff attitudes
Abstract
Artificial Intelligence (AI) is having an increasingly dramatic impact on Technology Enhanced Learning (TEL) in Higher Education (HE). Popenici and Kerr (2017) observed the emergence of AI use in HE and pinpointed challenges for institutions and students, including issues of academic integrity, privacy and “the possibility of a dystopian future” (p. 11). Potential benefits of AI in HE include creating learning communities through chatbots (Studente & Ellis, 2020), automated grading, individualized learning strategies and improved plagiarism detection (Owoc et al., 2019). It is unclear how often, and in what manner, students engage with AI during their learning and when creating submissions for assessment tasks, and whether this engagement is creating unrealistic outcomes. It is also unclear how educators engage with AI in their teaching and curriculum/assessment design, and how this may be impacting the learning outcomes of their cohorts. This research study was conducted to investigate the perceived immediate and long-term implications of staff and student engagement with AI for learning and teaching within the University of Adelaide.
The design of the research study is underpinned by a blended approach combining Situational Ethics and Planned Behavior Theory to understand the ethical considerations, behavioral activities and future intentions of staff and students regarding the use of AI. Situational Ethics provides a framework for examining the contextual nature of ethical decision-making regarding AI (Boddington, 2017; Memarian & Doleck, 2023), while Planned Behavior Theory provides an understanding of individuals' motivation and rationalization for engaging with AI (Wang et al., 2022). Employing a mixed qualitative and quantitative design, with data collected via online surveys, the study sheds light on the ethical challenges and attitudes associated with AI implementation in higher education and provides insights into the factors that influence staff and students' individual intentions to engage with AI technologies in Learning and Teaching.
Participants from all faculties, spanning a wide diversity of student cohorts and staff, responded to the surveys. Initial findings reveal that educators suspect greater student use of AI than the data demonstrate. The most frequent use of AI by students is for checking grammar, and this is more prominent in the international student cohort. Students trust their human educators more than AI for course content and feedback on assessments. Educators are comfortable using AI but also feel they need greater support and training. The majority of students (70%, n = 126) are not concerned about the implications of using Generative AI in higher education with respect to privacy, bias, ethics, or discrimination. University staff, however, demonstrate an active concern in this field: their most common use of AI is to test its capability to complete assignments. These and other findings from the study can provide guidance to staff and students by describing current practices and making recommendations regarding assessment, curriculum design, and Learning and Teaching (L&T) activities.
License
Copyright (c) 2023 Edward Palmer, Daniel Lee, Matthew Arnold, Dimitra Lekkas, Katrina Plastow, Florian Ploeckl, Amit Srivastav, Peter Strelan
This work is licensed under a Creative Commons Attribution 4.0 International License.