TY - JOUR
AB - As school districts focus on improving learning, they can learn not only from when and where interventions work—but also from why they sometimes do not. Policymakers widely embraced high-impact tutoring as an evidence-supported strategy to address learning delays from the COVID-19 pandemic. However, scaling these promising practices can be difficult, and not all implementations will be effective. Many districts have turned to third-party virtual tutoring providers to deliver student supports during the school day. Using random assignment, we evaluate the impacts of one such program for 3rd through 8th grade students in a suburban Texas school district. Compared with students assigned to the comparison interventions, we find no effect of assignment to virtual tutoring on math achievement, and, for reading, we find a moderate negative effect on the state end-of-year assessment (i.e., -0.09 SD) and no effect on a low-stakes exam. Drawing from frameworks for interpreting null or unexpected results in education experiments, we find further evidence of subject-specific heterogeneity in the implementation and efficacy and identify coverage of standards-aligned material as a moderator of estimated effectiveness relative to “business-as-usual” interventions. This paper offers strategies to identify factors contributing to null or unexpected results and highlights implications for designing policy-relevant studies to assess educational interventions.
AU - Huffaker, Elizabeth
AU - Robinson, Carly D.
AU - Bardelli, Emanuele
AU - White, Sara
AU - Loeb, Susanna
PY - 2025
ST - When interventions don’t move the needle: Insights from null results in education research
TI - When interventions don’t move the needle: Insights from null results in education research
UR - http://www.edworkingpapers.com/ai25-1295
ER -