
"Can Behavioral Tools Improve Online Student Outcomes? Experimental Evidence from a Massive Open Online Course", Patterson 2014; excerpts:

"One explanation for poor performance in online courses is that aspects of the online environment lead students to procrastinate, forget about, or be distracted from coursework. To address student time-management issues, I leverage insights from behavioral economics to design three software tools including (1) a commitment device that allows students to pre-commit to time limits on distracting Internet activities, (2) a reminder tool that is triggered by time spent on distracting websites, and (3) a focusing tool that allows students to block distracting sites when they go to the course website. I test the impact of these tools in a large-scale randomized experiment (n=657) conducted in a massive open online course (MOOC) hosted by Stanford University. Relative to students in the control group, students in the commitment device treatment spend 24% more time working on the course, receive course grades that are 0.29 standard deviations higher, and are 40% more likely to complete the course. In contrast, outcomes for students in the reminder and focusing treatments are not statistically distinguishable from the control.

In higher education, only 59% of students complete the degree programs they begin

While little work has been done to investigate the impact of commitment devices and reminders in education, there is evidence of the effectiveness of commitment devices and reminders in other settings. Commitment devices have been shown to significantly improve effort at work, savings behavior, and health behaviors (Ashraf et al., 2006; Bryan et al., 2010; Kaur et al., 2011). Additionally, recent studies have found significant positive impacts of reminders on savings behavior (Karlan et al., 2010) and health outcomes (Calzolari and Nardotto, 2012; Krishna et al., 2009; Austin et al., 1994)... Commitment devices have been shown to significantly increase desired long-run behaviors including effort at work (Kaur et al., 2011), savings behavior (Ashraf et al., 2006; Thaler and Benartzi, 2004), and smoking cessation (Giné et al., 2010). While there is limited evidence of the impact of formal commitment devices in education, Ariely and Wertenbroch (2002) find that students hired to proofread multiple papers over the course of three weeks performed significantly better at the task when given the option to set binding intermediate deadlines.

While the potential benefits of online education and MOOCs are large, completion rates in online education are often very low. For example, Xu and Jaggars (2011) find that observationally equivalent community college students are 10-15 percentage points less likely to complete online courses than traditional courses. At the University of Phoenix, the largest provider of online degrees in the United States, the graduation rate for full-time online students is only 19%.

Additionally, a number of studies find that students report self-regulation and time-management problems as primary reasons for failure in online courses (Doherty, 2006; Winters et al., 2008). While issues of self-regulation and time-management are likely to impact all students, aspects of the online learning environment may make students particularly susceptible to issues with time-management. Specifically, characteristics of the online course environment, such as anonymity (e.g. Kast et al., 2012) and unstructured scheduling (e.g. Ariely and Wertenbroch, 2002), make students prone to behaviors that could limit their ability to achieve their course goals.

There is evidence that people are overconfident in their ability to remember their plans. For example, Ericson (2011) finds that MBA students significantly overestimate their ability to remember to claim a payment in six months. Students’ decisions suggest an expectation of claiming payments 76% of the time, while only 53% of students actually claim the payment.

Participants for this study were recruited from enrollees in a nine-week Stanford statistics massive open online course (MOOC) which was held in 2014. This completely online course was administered by Stanford University on the Stanford OpenEdX platform. Although the course was administered by Stanford, course enrollment was free, open to anyone worldwide, and provided no formal credit at Stanford University. Students, however, could receive a completion certificate or certificate with distinction by scoring at least 60% or 90% in the course, respectively. Scores for the course were composed of a multiple-choice final exam (45%), nine weekly homework assignments (45%), and participation in 53 short quizzes (10%). The course content was primarily delivered in lecture videos and covered a number of topics in statistics including basic statistical measures, probability distributions, statistical inference, statistical tests, and regression analysis. ...My primary sample consists of the 657 students who participated in the MOOC, completed a pre-study survey, and installed software prior to the first course assignment deadline (a participation rate of 18%). This analysis excludes 120 students who completed the pre-study survey and installed software prior to the first assignment deadline, but never visited the course website. Assignment to treatment condition was uncorrelated with whether students ever visited the course website (F=0.5, p=0.68). Students were incentivized to participate with $12 in Amazon.com gift cards—$5 for completing the enrollment survey and installing time management software and $7 for using software and completing a post-study survey.
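The grading scheme just described (45% final exam, 45% homework, 10% quiz participation, with certificates at 60% and 90%) can be written out directly; a minimal sketch, with function names of my own invention rather than anything from the course software:

```python
def course_score(final_exam, homework_avg, quiz_participation):
    """Weighted course score on a 0-100 scale, per the MOOC's scheme:
    final exam 45%, nine weekly homeworks 45%, 53 short quizzes 10%.
    Each input is itself on a 0-100 scale."""
    return 0.45 * final_exam + 0.45 * homework_avg + 0.10 * quiz_participation

def certificate(score):
    """Completion certificate at >= 60%, distinction at >= 90%."""
    if score >= 90:
        return "distinction"
    if score >= 60:
        return "completion"
    return None
```

For example, a student with 80 on the final, a 70 homework average, and full quiz participation scores 77.5 and earns a plain completion certificate.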
Participants in this study were randomly assigned to one of four treatment groups:
(1) control, (2) commitment device, (3) reminder, and (4) focused study.
...To ensure that participants did not differentially select themselves into the study based on the treatment conditions, all students installed the same basic version of the software and were not informed of their software’s functionality until after they had successfully installed the software and completed the enrollment survey. The particular functions of their treatment software were not turned on until the course started, or the day following their installation if they installed the software after the course began.
...I worked with RescueTime, a company that makes time-tracking software, to develop the software tools used in this study. RescueTime implemented the design for each tool and provided software support throughout the study... When running, this software tracked and categorized time spent in the active application or browser window. Each activity was categorized into groups such as email, shopping, news, entertainment, social networking, writing, and education, and each activity received a productivity score of unproductive, neutral, or productive. [These categorizations and productivity scores were defined by RescueTime defaults. These defaults were set by an algorithm that combined website query information with aggregated user scores.] The information collected by the software was used to execute each of the treatment conditions described below... The software was programmed to automatically run when the participant’s computer was turned on. The software could not be closed from any menu option and could only be turned off by manually quitting the application from the computer’s task manager/activity monitor function. Activities were tracked and logged at the application and web domain level; keystrokes or actions taken within an application, and pages within a web domain, were not recorded. If multiple applications or browser tabs were open, the activity was attributed to the application or webpage with the most recent action. When a person stopped interacting with an application or website, the software stopped tracking activity even when the application or website remained open... In addition to having access to time-use summary reports, students assigned to the commitment device treatment were able to set a limit on distracting Internet time each day.
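The attribution rules described here (time is credited to the window with the most recent user action, and stops accruing once interaction stops) can be sketched roughly as follows. This is my own illustration, not RescueTime's implementation; the event format, idle timeout, and names are assumptions:

```python
from dataclasses import dataclass

IDLE_TIMEOUT = 120  # seconds; assumed cutoff after which tracking pauses

@dataclass
class Action:
    timestamp: float  # seconds since some epoch
    activity: str     # application name or web domain acted in

def attribute_time(actions, end_time):
    """Attribute elapsed time to the activity with the most recent user
    action, capping each interval at the idle timeout so that time stops
    counting once the user stops interacting."""
    totals = {}
    boundaries = actions[1:] + [Action(end_time, "")]
    for cur, nxt in zip(actions, boundaries):
        active = min(nxt.timestamp - cur.timestamp, IDLE_TIMEOUT)
        totals[cur.activity] = totals.get(cur.activity, 0) + active
    return totals
```

For example, actions at t=0 (mail), t=30 (news), and t=400 (course), observed until t=460, credit 30 s to mail, 120 s to news (the remaining 250 s count as idle), and 60 s to the course site.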
To maximize the expected impact of the treatment, students were initially assigned a limit that corresponded to the goal stated in the pre-study survey. This approach leverages the tendency people have to stay with a default choice (e.g., Madrian and Shea, 2001) and bypasses the issue of naïve students being unwilling to initially opt into a commitment treatment. Participants in this treatment group were sent a daily email at 6:45 a.m. that informed them of their current limit and asked them whether they wished to reset their limit (see Appendix Figure 2 for an example of how students set their distracting-time limit). Once students exceeded their set limit, distracting websites were blocked (blocked screen shown in Appendix Figure 3). After exceeding their limit, students were only able to unblock websites on a site-by-site basis and needed to indicate a reason for unblocking each site. The commitment device has the potential to address issues of present-biased preferences by allowing students to make future distracting computer use more costly.
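The commitment-device behavior (a daily limit on distracting time; once exceeded, sites are blocked and can only be unblocked one at a time with a stated reason) amounts to a small state machine. A minimal sketch with invented names:

```python
class CommitmentDevice:
    """Illustrative sketch of the commitment-device logic described in the
    paper; class and method names are mine, not the study software's."""

    def __init__(self, daily_limit_minutes):
        # The initial limit defaulted to the student's pre-study goal.
        self.limit = daily_limit_minutes
        self.distracting_minutes = 0
        self.unblocked = {}  # site -> reason the student gave for unblocking

    def log_distracting(self, minutes):
        self.distracting_minutes += minutes

    def is_blocked(self, site):
        # Distracting sites are blocked once the daily limit is exceeded,
        # except those the student has explicitly unblocked.
        return self.distracting_minutes >= self.limit and site not in self.unblocked

    def unblock(self, site, reason):
        # Unblocking is site-by-site and requires stating a reason.
        if not reason:
            raise ValueError("a reason is required to unblock a site")
        self.unblocked[site] = reason
```

Requiring a per-site reason is the friction that makes future distracting use more costly, which is the present-bias mechanism the paper emphasizes.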

While the impacts of the reminder and focusing treatments on homework submission patterns are smaller than those estimated for the commitment device and statistically indistinguishable from the control, the estimated effects for both groups are positive (0.27 and 0.58 additional homework assignments, respectively) and large effects cannot be ruled out for these groups.
The impacts of the treatment tools on student outcomes correspond closely with those estimated for effort. Column 3 of Table 2 shows that the commitment device improves total course performance by 0.29 standard deviations, which is significant at the 5% level. To provide some context, this is roughly the same difference in course performance observed between students with Ph.D.s or M.D.s and students with bachelor’s degrees (0.28 standard deviations, significant at the 1% level). In contrast, the reminder treatment has essentially no measured influence on course performance (an increase of 0.01 standard deviations) and the estimated impact of the focusing treatment is one-third the size of the commitment device (0.10 standard deviations) and statistically indistinguishable from the control. Finally, column 4 of Table 2 indicates that the commitment device has a large impact on course completion, increasing completion rates by 40% or 11 percentage points (significant at the 5% level). The reminder and focusing treatments, however, have no measurable impact on completion, with point estimates that are close to zero (both associated with a 1 percentage point increase in completion) and that are significantly smaller than the estimated impact of the commitment device (both significant at the 10% level).
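As a quick consistency check on the two ways the completion effect is quoted, the relative (40%) and absolute (11 percentage point) figures jointly imply a control-group completion rate of about 27.5%:

```python
relative_increase = 0.40   # commitment device: +40% completion (relative)
absolute_increase = 0.11   # the same effect: +11 percentage points (absolute)

# relative = absolute / baseline  =>  baseline = absolute / relative
control_rate = absolute_increase / relative_increase
treated_rate = control_rate + absolute_increase
print(f"implied control completion ≈ {control_rate:.1%}, treated ≈ {treated_rate:.1%}")
```

That is, roughly 27.5% completion in the control group versus roughly 38.5% under the commitment device.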

While the estimates are somewhat imprecise, Figures 1 and 2 show three interesting patterns. First, differences in effort between the commitment and control group, in terms of hours of course time and homework submissions, are largest at the beginning of the course but remain positive and significant for the majority of the course. Second, the reminder treatment appears to have no positive impact on course outcomes at any point during the study. Third and finally, the differences in effort between the focusing treatment and control are significant at the beginning of the course but then dissipate after the first two to three weeks.

To test whether expected course performance impacted the magnitude of the treatment response, I implement a split-sample endogenous stratification estimator as outlined by Abadie et al. (2013). This estimation strategy uses students in the control group to generate predicted outcomes for students in all treatment groups (including the control) and then estimates the treatment effects within quantiles of predicted outcomes. To overcome the bias introduced by overfitting issues that arise when a student’s characteristics are used to predict their own outcomes, this estimation strategy takes the following steps: (1) randomly select half the control group and use this group to estimate predicted outcomes with observable pre-study characteristics for the remainder of the students; (2) bin students into predicted outcome quantiles (excluding the students used to estimate predicted outcomes); (3) estimate treatment effects within quantile bins and store estimates; (4) iterate steps 1-3 multiple times; and (5) bootstrap standard errors.
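The five steps can be sketched as follows. This is a simplified illustration under stated assumptions, not the paper's code: I use a plain linear regression as the prediction model and a simple treated-minus-control mean within each bin, and omit step (5), the bootstrap for standard errors:

```python
import numpy as np

def endogenous_stratification(X, y, treat, n_bins=5, n_iter=100, seed=0):
    """Split-sample endogenous stratification, steps (1)-(4) above.
    X: pre-study covariates (n or n x k), y: outcomes, treat: 0/1 indicator.
    Returns per-quantile-bin treatment effects averaged over iterations."""
    rng = np.random.default_rng(seed)
    X = np.atleast_2d(np.asarray(X, dtype=float).T).T  # ensure n x k
    control = np.where(treat == 0)[0]
    effects = np.zeros((n_iter, n_bins))
    design = lambda idx: np.column_stack([np.ones(len(idx)), X[idx]])
    for it in range(n_iter):
        # (1) Fit the prediction model on a random half of the control group.
        half = rng.permutation(control)[: len(control) // 2]
        beta, *_ = np.linalg.lstsq(design(half), y[half], rcond=None)
        rest = np.setdiff1d(np.arange(len(y)), half)  # everyone not used to fit
        yhat = design(rest) @ beta
        # (2) Bin the remaining students into predicted-outcome quantiles.
        ranks = np.argsort(np.argsort(yhat))
        bins = (ranks * n_bins) // len(rest)
        # (3) Within-bin treatment effect: treated mean minus control mean.
        for b in range(n_bins):
            idx = rest[bins == b]
            t, c = idx[treat[idx] == 1], idx[treat[idx] == 0]
            if len(t) and len(c):
                effects[it, b] = y[t].mean() - y[c].mean()
    # (4) Average the stored estimates over iterations. Step (5) would
    # bootstrap this entire procedure for standard errors.
    return effects.mean(axis=0)
```

Refitting on a fresh random half of the control group at each iteration is what removes the overfitting bias: no student's own data are ever used to form the prediction that places them in a quantile bin.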
I use the above strategy to estimate the impact of the treatments on effort, homework submissions, and points scored and present the results of this estimation in Table 4. These results suggest that the impact of the commitment device has a strong positive correlation with predicted outcomes. For each outcome—course hours, homework, and grades—the estimated impact of the commitment device increases with the quintile of predicted outcome.

Only 52% of study participants completed the post-study survey. Also, the first row in Table 5 indicates that survey response was not constant across treatment groups—a greater portion of students in the commitment device treatment responded to the survey than those in other treatments. Therefore, the results should be interpreted cautiously. Nevertheless, student responses do shed additional light on the potential mechanisms driving response to treatments. The most significant difference observed between treatment and control is that students in the commitment and reminder treatments were much more likely than those in the control group to report that the treatment made unproductive time less enjoyable. Students in the commitment and reminder treatments were 81% (23 percentage points, significant at the 1% level) and 61% (17 percentage points, significant at the 5% level) more likely than students in the control treatment to state that the software made unproductive time less enjoyable, respectively. That students in the commitment device treatment found unproductive time less enjoyable suggests that the commitment device worked in the way it was intended—making spending time on unproductive websites more difficult or costly. Other differences were not statistically significant, but students in the commitment device treatment were most likely to report that the software increased the time they spent on the course and to report using the software to set course goals. The results of the post-study survey are consistent with students using the commitment device to address present-biased preferences—making distracting time more costly in order to increase the amount of time spent on coursework."

#procrastination #mooc #rescuetime #psychology  