Question Dosage in MOOCs: An Empirical Investigation

Video lectures in online courses often use pop-up questions or quizzes to enhance learners' engagement and learning outcomes. However, there is limited research on how frequently pop-up questions should be asked, and on how this design decision may impact learners' self-regulation and motivation to learn, as well as their perceptions of confidence and challenge associated with the learning content. This study addresses these questions while considering learners' final learning outcomes in a massive open online course (MOOC). Learners were divided into high and low frequency groups: the high frequency group was given a question approximately every 2 minutes, whereas the low frequency group was given a question only once at the end of each video. No significant differences in any of the above factors were found between the two groups. Educators may take these results into account when designing MOOCs in the future. MOOCs are costly and time-consuming to create, and this experiment suggests that spending resources on this level of detail may not be necessary.


Introduction
Lecture videos are the most common instructional method in flipped and hybrid courses as well as in fully online courses such as Massive Open Online Courses (MOOCs). However, there is a growing concern that passive watching of video lectures may not be an effective way to promote learning and engagement (Poquet et al., 2018). Therefore, course developers have focused on designing digital video lectures that can promote engagement and improve learners' performance. For example, researchers have investigated design factors such as optimizing video lecture durations (Guo et al., 2014), embedding interactive elements into videos (Kovacs, 2016), the presence of a speaker head (Guo et al., 2014), and Socratic dialogues involving novices versus direct lecturing (Lodge et al., 2017). Hansch et al. (2015) suggested that the best choices may not be universal across courses, as they may differ based on subject matter, learning objectives, learning goals and student backgrounds. Instructors therefore need to select the most appropriate practices for creating videos so that students are best supported in achieving their desired learning outcomes. Such practices include lesson planning at the pre-production stage to keep videos shorter (Ibrahim et al., 2012), displaying the instructor's talking head at times during the video (Mayer & Moreno, 2003), using conversational style lectures (Mayer, 2008), and designing lectures to focus on the first-watch experience while tutorials focus more on skimming and rewatching (Guo et al., 2014).
Another strategy that can promote learners' engagement and positively influence their achievement is interpolating quizzes or pop-up questions into the videos (Obodo & Baskauf, 2015), particularly in MOOCs (Lackner et al., 2015). For example, in one study, students who watched a video lecture with frequent questions performed better on a post-test than those who watched a lecture without questions (Kreiner, 1997). Overall, pop-up questions can have several advantages for learners. First, they can improve learners' engagement by making the video more interactive (Wachtler et al., 2016). Second, they can reduce learners' cognitive load by signalling what information is important (Geri et al., 2017). Third, pop-up questions can support active learning by prompting students to recall and retrieve important information from the video (Brame, 2016). Additionally, pop-up questions may encourage learners' note-taking behavior and better self-assessment, and may also boost their learning by reducing mind-wandering and anxiety related to test preparation (Szpunar et al., 2013).
While prior research has demonstrated the effectiveness of pop-up quizzes, there is limited research on how many or how often questions should be given to students so that their attention to, and retention of learning from, the video lectures may be supported. Therefore, this study addresses the following overarching question: does the frequency of in-video questions (i.e., question dosage) influence the effectiveness of a course?
1. Do different dosages promote different levels of student motivation and self-regulation?
2. How does dosage influence students' perceptions, such as the challenge associated with the content and students' confidence in that content knowledge?
3. Do different dosages lead to different learning outcomes?

Methodology

Study Design
This experimental study was conducted using an A/B testing design in a 5-week Coursera MOOC on microeconomics offered by a large North American private university. The existing version of the MOOC was slightly modified to arrange the video segment durations and the corresponding timing of the pop-up questions. The pop-up questions typically appeared in the middle of a lecture video to check learners' understanding. For A/B testing, learners were assigned either a version of the course with a high frequency of pop-up questions (approximately every two minutes) or a version with a low frequency of pop-up questions (an interval of 5-7 minutes). All pop-up questions were multiple-choice questions. Most questions were quick and easy, such as information recall, but some required more thought, such as inference and knowledge application. For each question, the method needed to solve the problem could be found in the segment of the video directly before the question.
Over the duration of the course, data was collected about learners' motivation to learn and their self-regulated learning (SRL) strategies, such as how they plan, monitor, and regulate their learning. Questions were drawn from the Motivated Strategies for Learning Questionnaire (MSLQ) by Pintrich and De Groot (1991) and were slightly modified for the domain¹. In addition, weekly perception surveys based on prior work by Nawaz et al. (2022) were also collected, where each week learners were asked how confident and challenged they felt about the learning content. Lastly, a demographics survey was conducted, asking learners about their age, gender, employment status, education level, prior experience with MOOCs, English proficiency, country of residence and background knowledge in microeconomics. In addition to the survey data, clickstream data were also analysed. The clickstream data consisted of learners' behavioural patterns such as time on task, task attempts, quiz and assignment completion, scores on quizzes and final grades.
To assess the relationship between question dosage and learners' motivation and SRL, questions from the pre-course and post-course surveys were categorized as relating to motivation or SRL². To look at the overall levels of motivation and SRL, responses to these questions were averaged, giving each learner one motivation and one SRL score per survey.
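The per-learner scoring described above can be sketched as follows. This is a minimal illustration, assuming each learner's survey responses are stored as a mapping from item ID to a numeric Likert value; the item groupings shown are hypothetical examples drawn from the footnoted items, not the full MSLQ categorization.

```python
from statistics import mean

# Hypothetical item groupings (example items only, not the full instrument).
MOTIVATION_ITEMS = {"B9", "B25"}
SRL_ITEMS = {"D32", "D43"}

def score_learner(responses):
    """Average a learner's responses into one motivation and one SRL score."""
    motivation = mean(v for k, v in responses.items() if k in MOTIVATION_ITEMS)
    srl = mean(v for k, v in responses.items() if k in SRL_ITEMS)
    return motivation, srl

print(score_learner({"B9": 3, "B25": 2, "D32": 2, "D43": 1}))  # (2.5, 1.5)
```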
To assess the relationship between question frequency and learners' perceptions of course difficulty, learners were classified as perceiving the course to be easy, medium or hard. Each week, learners were asked how confident they felt and how challenged they were about the learning material. Learners who reported a higher level of confidence than challenge (e.g., "very confident" and "somewhat challenging") were classified as finding the course to be easy. Learners who reported a similar level of confidence and challenge (e.g., "somewhat confident" and "somewhat challenging") were classified as finding the course to be medium, and finally, learners who reported a lower level of confidence than challenge (e.g., "somewhat confident" and "very challenging") were classified as finding the course to be hard. When designing a course, educators should ideally aim for most learners to find the content to be medium, meaning that they are adequately challenged without finding the course too easy or too hard (Csikszentmihalyi, 1990).
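The classification above amounts to comparing a learner's confidence level against their challenge level on a common ordinal scale. A minimal sketch, assuming the survey options map onto the ordered levels below (the exact response scale is an assumption):

```python
# Hypothetical mapping from survey response intensity to an ordinal level.
LEVELS = {"not at all": 0, "somewhat": 1, "very": 2}

def perceived_difficulty(confidence, challenge):
    """Classify a learner's weekly perception by comparing the two levels."""
    c, ch = LEVELS[confidence], LEVELS[challenge]
    if c > ch:
        return "easy"    # more confident than challenged
    if c == ch:
        return "medium"  # confidence and challenge in balance
    return "hard"        # more challenged than confident

assert perceived_difficulty("very", "somewhat") == "easy"
assert perceived_difficulty("somewhat", "somewhat") == "medium"
assert perceived_difficulty("somewhat", "very") == "hard"
```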
To assess how question frequency impacted learner achievement or outcomes, the scores of the first attempts at the final exam were compared between the high and the low frequency groups.

Data
¹ For motivation to learn, examples of items rephrased from the MSLQ include B9 (I find the contents of this course to be personally meaningful) and B25 (I want to learn as much as possible from this course). For SRL, examples of items include D32 (When working on this course, I will ask myself questions to make sure I understand the materials) and D43 (Before I start new activities, I often skim them to see how they are organised).
² Further information on the pre-course and post-course surveys will be provided in the appendices.
A total of 1467 learners participated in the study, with 713 in the low frequency course version and 754 in the high frequency version³. For the two groups, the completion rates based on passing the final exam were comparable: 166 of 713 learners (23.3%) completed the low frequency course version, and 170 of 754 learners (22.5%) completed the high frequency course version.
Of the 1467 participants, 46% were women and 54% were men. The vast majority of learners were under the age of 35: 17% were under 18, 41% were 18-24, and 23% were 25-34. 71% of learners spoke a first language other than English, but 69% of learners were fluent in English whether or not it was their first language. 22% of learners had a graduate degree, 37% had a Bachelor's degree, 12% had some post-high school education but no degree, 20% had a high school diploma, and 9% had less than a high school diploma. Most learners were either employed full-time (40%) or were not employed and not looking for work (34%). 49% of learners had never taken a MOOC prior to this course. There were no statistically significant demographic differences between the learners in the low and high frequency versions of the course for any variable.

Q1-Motivation and Self-Regulated Learning:
As learners began the MOOC, based on their pre-course survey responses, there was no significant difference between those in the low frequency (M = 2.41, SD = 0.64) and those in the high frequency course versions (M = 2.41, SD = 0.63) in terms of their motivation (t = -0.03, df = 977.57, p = 0.98). There was also no difference in SRL scores between the low frequency (M = 2.07, SD = 0.67) and the high frequency (M = 2.11, SD = 0.69) groups (t = -0.76, df = 969.94, p = 0.45). Correspondingly, in terms of learners' post-course motivation, no significant difference was found between the low (M = 2.28, SD = 0.76) and the high frequency (M = 2.31, SD = 0.76) versions (t = -0.32, df = 220.93, p = 0.75). There was also no difference in post-course SRL scores between the low (M = 1.75, SD = 0.94) and the high frequency (M = 1.89, SD = 0.91) versions (t = -1.16, df = 219.44, p = 0.25). This null result also holds when looking at each motivation and SRL question individually, suggesting that question frequency did not impact learners' motivation and SRL.
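The non-integer degrees of freedom above indicate Welch two-sample t-tests. As an illustration, the pre-course motivation comparison can be reproduced from the reported summary statistics; the group sizes below are assumptions (the reported df of roughly 978 is consistent with somewhere near 490 respondents per group), so the resulting df is illustrative rather than the study's exact value.

```python
from scipy.stats import ttest_ind_from_stats

# Welch's t-test from reported means and SDs; nobs values are assumed.
t, p = ttest_ind_from_stats(
    mean1=2.41, std1=0.64, nobs1=490,  # low frequency group (n assumed)
    mean2=2.41, std2=0.63, nobs2=490,  # high frequency group (n assumed)
    equal_var=False,                   # Welch's correction for unequal variances
)
print(round(t, 2), round(p, 2))  # t near 0, p near 1: no significant difference
```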

Q2-Self-Perceptions of Difficulty:
Looking at the course as a whole, there was no significant difference in learners' perceptions of difficulty between the high and the low frequency versions (χ²(df = 2) = 3.709, p = 0.156). Considering each individual week of the course, the dosage of in-video questions did potentially impact learners' perceptions of difficulty in the first two weeks, but not in the rest of the course. In week 1, the difference in perception between the high frequency and the low frequency groups was marginally significant (χ²(df = 2) = 4.6184, p = 0.0993), and in week 2, the difference was statistically significant (χ²(df = 2) = 8.217, p = 0.016). For both weeks 1 and 2, the learners in the high frequency (i.e., high question dosage) group were more likely to perceive the difficulty of the course as medium than the learners in the low frequency group. This suggests that a slightly higher question dosage may be beneficial, as it could support learners in perceiving the difficulty to be just the right amount. The overall proportions of learners who felt the course was easy, medium, and hard in weeks 1 and 2 are shown in Table 1. Prior literature (Piaget, 1985; Vygotsky, 1978) has shown that to support learners in their learning journey, content should be designed so that it is neither too easy (such that learners become bored and disengaged) nor too hard (such that learners struggle repeatedly and ultimately give up) (Baker et al., 2010; Nawaz et al., 2018). Under optimal learning conditions, the content should be moderately difficult, meaning that learners are adequately challenged (Nawaz et al., 2020). The findings above show that the question-dosage intervention improved learners' experience during the first two weeks of the course, after which no significant difference was found in learners' perceptions. It is possible that something unique about the nature of this course or the design of the in-video questions led to the intervention being most appropriate for the material in the first two weeks. Considering the low retention rate in MOOCs (where learners may quit participating in a MOOC), it is also possible that the learners who continued beyond the initial two weeks were more motivated and more self-regulated, and as a result were no longer impacted by the in-video quiz questions.
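The week-by-week comparisons above use a chi-square test of independence on the 2 (course version) × 3 (easy/medium/hard) counts, which gives the df = 2 reported. A minimal sketch follows; the counts are made-up illustrative data, not the study's actual weekly figures.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x3 contingency table: rows are course versions,
# columns are perceived-difficulty categories (easy, medium, hard).
counts = [
    [30, 55, 35],  # low frequency version (illustrative counts)
    [20, 80, 25],  # high frequency version (illustrative counts)
]
chi2, p, df, expected = chi2_contingency(counts)
print(f"chi2(df={df}) = {chi2:.3f}, p = {p:.3f}")
```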

Q3-Learning Outcomes:
In the low frequency version, 168 learners took the final exam, with an average score of 76.22%. In the high frequency version, 174 learners took the final, with an average score of 72.61%. There was no significant difference in learning outcomes between the high (M = 72.61, SD = 26.89) and the low frequency (M = 76.22, SD = 23.92) versions (t = -1.31, df = 337.76, p = 0.19).
Ultimately, we found little evidence that having an in-video question every 2 minutes (rather than one question at the end of each video) improves learners' experience in a MOOC. Educators may take these results into account when designing MOOCs in the future. MOOCs are costly and time-consuming to create, and this experiment suggests that spending resources on this level of detail may not be necessary. Though courses may differ based on subject matter, learning objectives, learning goals and students, this question should be further investigated across other MOOCs to determine whether the findings are consistent. Another way to expand this study would be to carry out these investigations in non-MOOC settings, such as fully online subjects offered by universities as part of a certificate or degree program. This would allow researchers to analyse the influence of question dosage on learners' weekly perceptions while disentangling the likely low-retention effects of MOOCs, which are often linked to learners' prior experience, skills and abilities, such as lower SRL and lower motivation to learn (Aldowah et al., 2020).

Conclusion:
Designing effective digital video lectures that promote student engagement and improve learning outcomes requires careful use of interactive elements, personalization, storytelling, and collaboration. While interactive elements such as questions can be effective, we found little evidence that having an in-video question every 2 minutes, rather than one question at the end of each video, improves learners' experience in a MOOC. MOOCs are costly and time-consuming to create, and this experimental study suggests that spending resources on this level of detail may not be necessary.