Usability and user experience evaluation of Virtual Integrated Patient
Keywords: virtual patient, medical education, usability evaluation
Most existing Virtual Patients use simplistic, predictable, and prescriptive approaches that limit deductive learning and the development of decision-making skills in medical students. We have designed a chat-based Virtual Integrated Patient (VIP) for performing patient interviews, physical examinations, and investigations to help medical students develop reasoning skills. In this paper, we present results from a two-part study. In the first part, we conducted a usability evaluation with seven medical students and six clinicians. The objectives of the usability evaluation were to determine how VIP’s user interface and its features affect usability (efficiency, effectiveness, and learnability), as well as the general subjective user experience associated with use of the system. Each participant completed a user experience questionnaire, the System Usability Scale, and a self-prepared survey form. A significant difference was found between the results of students and tutors. Owing to a lack of training data, the chatbot model predicted incorrect responses, which led participants to feel frustrated. In the second part of the study, we retrained the chatbot model using this feedback, designed an error-correction approach, and engaged seven new medical students to test the chatbot intensively; a total of 2169 user interactions were performed with the chatbot. Of these, 77.4% were properly answered by the bot, 10.8% were out-of-domain concepts, 8.6% were unknown concepts (Li et al., 2018), and 3.3% were corrected using the error-correction approach.
Copyright (c) 2019 Pabba Anubharath, Yoon Ping Chui, Judy Sng, Lixia Zhu, Kai Tham, Edmund Lee
This work is licensed under a Creative Commons Attribution 4.0 International License.