A portrait of evaluation studies of learning technology innovations 2005-2010
Addressing the elephant in the room
DOI: https://doi.org/10.14742/apubs.2011.1802

Keywords: evaluation studies, research design, evidence, impact, meta-analysis

Abstract
Much effort has gone into the development of evaluation methods for learning technology. Yet the mantra remains the same: studies fail to produce reliable evidence to answer important questions about the impact of technology on student learning and behaviour. The authors conducted a meta-analysis of 100 evaluation studies published in two leading learning technology journals from 2005 to 2010. A set of thirteen criteria for critiquing the articles was derived from the principles of educational design research. This paper discusses findings concerning the extent to which studies are (a) theoretically grounded, (b) show evidence of impact on student learning and behaviour, and (c) potentially transferable to other higher education contexts. The findings resonate with comments in a recent report on Learning with Technology (ALT 2010) that 'research typically doesn't address the problem of building an ecology of learning, or treat integration of the innovation as a research issue' (p. 5). The authors are keen to discuss ways to improve the quality of future evaluation studies in learning technology, and some recommendations are proposed to stimulate discussion and feedback.
License
Copyright (c) 2024 Caroline Steel, Cathy Gunn
This work is licensed under a Creative Commons Attribution 4.0 International License.