IDE vs. pen-and-paper showdown - Performance and perceptions of testing first-year IT students’ programming skills
DOI: https://doi.org/10.14742/apubs.2023.541
Keywords: Programming, Assessment, Paper vs Computer
Abstract
This study analysed first-year IT students' performance and perceptions of Integrated Development Environment (IDE) versus pen-and-paper (P&P) programming testing methods. Sixty-eight participants completed two programming tests, one under each method, and the results were evaluated against criteria including code complexity and quality. No significant difference was observed in average scores between the two test methods. However, IDE tests showed fewer syntax errors and lower cyclomatic complexity, indicating potential advantages of the IDE's error-checking features. IDE tests also exhibited higher variability in student performance, suggesting varying comfort levels with the test environment. Additionally, a weaker correlation existed between actual and perceived performance in IDE tests, highlighting challenges in self-evaluation within a digital setting. Most students preferred IDE-based testing, citing efficiency, user-friendliness, and real-time feedback. Nonetheless, a minority recognised the value of P&P tests for demonstrating a deep understanding of syntax. This research enhances our understanding of programming pedagogy and assessment.
License
Copyright (c) 2023 David Harris, Garry Stewart Von Itzstein, Douglas Kelly, Elizabeth Smith
This work is licensed under a Creative Commons Attribution 4.0 International License.