Given the simultaneous rise in time-to-graduation and college GPA, it may be that students reduce their course loads to improve their performance. Yet the evidence to date shows only that increased course loads raise GPA. We provide a mathematical model showing that many unobservable factors, beyond student ability, can generate a positive relationship between course load and GPA unless researchers control students' schedules. West Point regularly implements the ideal experiment by randomly modifying student schedules with additional training courses. Using 19 years of administrative data, we provide the first causal evidence that taking more courses reduces GPA and increases course failure rates, sometimes substantially.
Using administrative data from D.C. Public Schools, I exploit exogenous variation in the presence and intensity of teacher monitoring to show that it significantly improves student test scores and reduces suspensions. Uniquely, my setting allows me to separately identify the effect of pre-evaluation monitoring from that of post-evaluation feedback. Monitoring's effect is strongest among teachers with the largest incentive to increase student test scores. As tests approach, unmonitored teachers sacrifice higher-level learning, classroom management, and student engagement, even though these pedagogical tasks are among the most effective. One possible explanation is that teachers ``teach to the test'' as a risk-mitigation strategy, even if doing so is less effective on average. Consistent with this, I show that teaching to the test has a smaller effect on the variance of student test scores than other teaching approaches. These results illustrate the importance of monitoring in contexts where teachers have the strongest incentive to deviate from pedagogically sound practices.