Artificially Intelligent Branching Simulations (AIBS) in Critical Care Neonatal Nursing

Authors

Dan Laurence, The University of Melbourne
DOI:

https://doi.org/10.14742/apubs.2024.1170

Keywords:

Branching Scenario, AI, AIBS, LLM, simulation, authentic learning

Abstract

Roleplaying, simulations and branching scenarios are among the most authentic, critical and effective educational experiences, though they have seen limited adoption due to the expense of creating, staffing, adapting and sustaining them over time (Davies, 2013). The new generation of large language models has made possible new affordances that enable delivering scenario-based roleplaying experiences more sustainably. This Pecha Kucha charts the development process, frontier technology and positive outcomes of an artificially intelligent branching simulation (AIBS) in critical care neonatal nursing.

Critical care disciplines and nursing have a tradition of using expensive in-person simulations and labor-intensive, high-stress scenarios to prepare students for the workplace (Jefferies, 2020). After such training, nurses undergo rigorous, high-pressure in-person evaluations, such as the ANSAT (Ossenberg, Mitchell & Henderson, 2020), which they must pass in order to practice. Foundational reading about nursing practices and reviewing practice manuals is a step towards passing such evaluations, but these more theoretical learning practices do not constructively align with the situational, authentic skills demanded in practice. With few learning experiences that scaffold towards critical in-person evaluation, students often report high levels of anxiety and feeling inadequately prepared, sometimes failing and leaving their studies and aspirations entirely (Cornine, 2020). Mindful also of an industry-wide shortage of critical care nurses, there is a clear need for bridging solutions that can affordably and effectively prepare trainee nurses for demanding in-person workplace evaluations.

This Pecha Kucha charts the development of an AIBS designed to bridge the learning gap between more theoretical training methods and high-stress in-person evaluations. The increasing popularity of conversational agents powered by large language models on web-based platforms such as Character.ai (Sarkar, 2024) suggests that students might find a chatbot an approachable, engaging and increasingly familiar experience. Initial tests directly prompting ChatGPT and Claude.ai both showed promise, but also raised reliability issues that mandated more direct oversight by qualified teaching staff. A technical solution was required as an intermediary between the students and the LLMs, one that enabled ‘human-in-the-middle’ (Mollick, 2024) oversight. After an extensive review, the educator-created tool ‘Cogniti’ was selected for its ability to provide a seamless, pretrained scenario chat experience while also enabling student account integration, data governance, supervisor oversight and privacy controls.
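To make the intermediary role concrete, the sketch below shows one way a relay between student and LLM might carry a pretrained scenario prompt and log every exchange for educator review. The names here (ScenarioRelay, call_llm, review_log) are illustrative assumptions only; they are not Cogniti's actual API.

```python
# A minimal sketch of a 'human-in-the-middle' relay between student and LLM.
# ScenarioRelay, call_llm and review_log are hypothetical names used for
# illustration; they do not describe Cogniti's internals.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ScenarioRelay:
    system_prompt: str                      # pretrained scenario instructions
    call_llm: Callable[[list[dict]], str]   # stand-in for whichever LLM API is used
    transcript: list[dict] = field(default_factory=list)

    def student_turn(self, message: str) -> str:
        """Relay one student message to the LLM and log both sides of the
        exchange so a supervising educator can audit it later."""
        self.transcript.append({"role": "user", "content": message})
        messages = [{"role": "system", "content": self.system_prompt}, *self.transcript]
        reply = self.call_llm(messages)
        self.transcript.append({"role": "assistant", "content": reply})
        return reply

    def review_log(self) -> list[dict]:
        """Expose the full transcript for supervisor oversight."""
        return list(self.transcript)
```

A supervisor-facing view could then render review_log() per student, which is what gives qualified teaching staff their point of oversight.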

The prompts used to instruct the Cogniti AI agent were developed using the RTRI prompting framework (Lui, 2023; Hardman, 2023). By separating the scenario into sequential chunks, each with a series of ‘expected responses’, it became possible to create an improvisational prompting script that resembled a sequence from a typical hospital procedures manual and aligned with the prescribed ANSAT placement evaluation framework (Ossenberg, Mitchell & Henderson, 2020). Allocating specific roles to the AI agent and to the trainee nurse within a framework of expected responses enabled the narrative to resemble an authentic interaction within a safe environment. Prompting the AI agent to progress the narrative only under specific conditions and to provide structured feedback to students enabled a cohesive educational experience that addressed specific gaps in student knowledge of neonatal practice and procedure. Staff and student feedback demonstrated positive results in terms of engagement, satisfaction, academic performance and student well-being.
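As an illustration of this chunked structure, the sketch below models sequential stages with expected responses and progression conditions, then flattens them into a single scenario prompt. The stage content and field names are invented for illustration; they are not the study's actual RTRI prompts.

```python
# A minimal sketch, assuming a staged scenario with 'expected responses':
# each chunk carries the narrative, the trainee actions it expects, and a
# condition for advancing, which are assembled into one scenario prompt.

from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    situation: str                  # narrative the agent presents in role
    expected_responses: list[str]   # actions the trainee should take
    advance_when: str               # condition for progressing the narrative

STAGES = [
    Stage(
        name="Initial assessment",
        situation="A preterm neonate in the NICU shows falling oxygen saturation.",
        expected_responses=["Check airway positioning", "Verify monitor readings"],
        advance_when="The trainee has checked the airway and the monitors.",
    ),
    Stage(
        name="Escalation",
        situation="Saturation continues to fall despite repositioning.",
        expected_responses=["Call for senior support", "Prepare suction equipment"],
        advance_when="The trainee has escalated appropriately.",
    ),
]

def build_scenario_prompt(stages: list[Stage]) -> str:
    """Flatten the staged scenario into one system prompt that assigns
    roles, lists expected responses, and states progression rules."""
    lines = [
        "You are the bedside environment and attending staff for a neonatal",
        "patient; the user is a trainee nurse. Stay in role. Work through the",
        "stages in order, advancing only when the stated condition is met,",
        "and give structured feedback against each stage's expected responses.",
    ]
    for i, s in enumerate(stages, start=1):
        lines.append(f"\nStage {i} ({s.name}): {s.situation}")
        lines.append("Expected responses: " + "; ".join(s.expected_responses))
        lines.append("Advance when: " + s.advance_when)
    return "\n".join(lines)
```

Keeping the stages as data rather than hand-written prose is one way such a script could stay aligned with an external framework like the ANSAT: each stage's expected responses can be mapped to the evaluation criteria they rehearse.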

Author Biography

Dan Laurence, The University of Melbourne

A Lead Learning Designer, technologist, occasional teacher and researcher in the field of education. Dan was awarded the Vice-Chancellor’s Teaching Excellence Award at Swinburne University in 2015 and again in 2017, a Vice-Chancellor’s Award for Teaching Excellence from La Trobe University in 2019 and the AFR Educational Technology Award in 2017, and was a finalist in the 2022 ASCILITE Battle of the LMS.

He brings an amalgam of pedagogical, design/usability and technological expertise that can be applied at a practical level to enable the co-design of quality student experiences.

Published

2024-11-11

Section

ASCILITE Conference - Pecha Kuchas