- Claire Mackevicius
Research shows that teachers seek out jobs close to home, but previous studies have been unable to test whether proximity to home is related to retention in the teaching profession. We leverage a unique dataset from Teach For America (TFA) linking individuals’ preferred teaching locations, actual teaching locations, and years in teaching for 7 years after entering the profession. Controlling for a detailed set of background, preference, and teaching assignment variables through a matched fixed effects design, we find that individuals assigned to a TFA region in their home state taught, on average, for 0.15 years longer than those who were not. This effect is strongest for teachers of color and those from low-income backgrounds: being assigned to teach in one’s home state is associated with 0.36 more years in teaching for those from low-income backgrounds and 0.47 more years for teachers of color. Both subgroups are approximately 8 percentage points more likely to stay in teaching for 7 or more years if assigned to their home state. Overall, this study provides evidence of a positive home state effect on teacher retention. Our results lend support to policies and programs that recruit from, or nudge teachers toward teaching in, their home states, particularly through alternative certification pathways, and as a means to increase teacher diversity.
We examine all known "credibly causal" studies to explore the distribution of the causal effects of public K-12 school spending on student outcomes in the United States. For each of the 31 included studies, we compute the same marginal spending effect parameter estimate. Precision-weighted method of moments estimates indicate that, on average, a $1000 increase in per-pupil public school spending (for four years) increases test scores by 0.0352 standard deviations, high school graduation by 1.92 percentage points, and college-going by 2.65 percentage points. These pooled averages are significant at the 0.0001 level. When benchmarked against other interventions, test score impacts are smaller than those on educational attainment -- suggesting that test-score impacts understate the value of school spending.
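The precision-weighted pooling described above can be illustrated with a minimal inverse-variance weighting sketch, the fixed-effect building block behind such method-of-moments averages. The effect sizes and standard errors below are illustrative placeholders, not values from the 31 included studies.

```python
# Inverse-variance (precision-weighted) pooled average: each study's
# effect is weighted by 1 / SE^2, so more precise studies count more.
effects = [0.02, 0.05, 0.04]       # hypothetical per-study effects (SD units)
std_errors = [0.01, 0.02, 0.015]   # hypothetical per-study standard errors

weights = [1 / se**2 for se in std_errors]  # precision = 1 / variance
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5       # standard error of the pooled estimate
```

Because weights scale with precision, a study with half the standard error receives four times the weight, which is why the pooled estimate sits closest to the most precise studies.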
The benefits of marginal capital spending increases take about five years to materialize and are about half as large as (and less consistently positive than) those of non-capital-specific spending increases. The marginal spending impacts for all spending types are less pronounced for economically advantaged populations -- though not statistically significantly so. Consistent with a cumulative effect, the educational attainment impacts are larger with more years of exposure to the spending increase. Average impacts are similar across a wide range of baseline spending levels and geographic characteristics -- providing little evidence of diminishing marginal returns at current spending levels.
To assuage concerns that pooled averages aggregate selection or confounding biases across studies, we use a meta-regression-based method that tests for, and removes, certain biases in the reported effects. This approach is straightforward and can remove biases in meta-analyses where the parameter of interest is a ratio, slope, or elasticity. We fail to reject that the meta-analytic averages are unbiased. Moreover, policies that generate larger increases in per-pupil spending tend to generate larger improvements in outcomes, in line with the pooled average.
To speak to generalizability, we estimate the variability across studies attributable to effect heterogeneity (as opposed to sampling variability). This heterogeneity explains between 76 and 88 percent of the variation across studies. Estimates of heterogeneity allow us to provide a range of likely policy impacts. Our estimates suggest that a policy that increases per-pupil spending for four years will improve test scores and/or educational attainment over 90 percent of the time. We find evidence of small possible publication bias among very imprecise studies, but show that any effects on our precision-weighted estimates are minimal.
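The share of across-study variation attributable to true effect heterogeneity rather than sampling error can be sketched with Cochran's Q and the standard I^2 statistic. This is a generic illustration of the concept, not the paper's exact estimator, and the inputs are hypothetical placeholders.

```python
# I^2: fraction of across-study variation beyond what sampling error
# alone would produce (0 = all sampling noise, 1 = all heterogeneity).
effects = [0.01, 0.05, 0.09, 0.03]      # hypothetical study effects
std_errors = [0.01, 0.012, 0.015, 0.02] # hypothetical standard errors

weights = [1 / se**2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: precision-weighted squared deviations from the pooled mean.
q = sum(w * (e - pooled)**2 for w, e in zip(weights, effects))
df = len(effects) - 1           # under pure sampling error, E[Q] = df
i_squared = max(0.0, (q - df) / q)
```

An I^2 in the range reported above (0.76 to 0.88) means most of the spread in study estimates reflects genuinely different policy effects, which is what justifies reporting a distribution of likely impacts rather than a single number.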