- Isaac M. Opper
What happens when employers screen their employees but only observe a subset of output? We specify a model with heterogeneous employees and show that their response to the screening affects output in both the probationary period and the post-probationary period. The post-probationary impact arises because heterogeneous responses affect which individuals are retained and hence the screening efficiency. We show that the impact of the endogenous response on both the unobserved outcome and screening efficiency depends on whether increased effort on one task increases or decreases the marginal cost of effort on the other task. If the response decreases unobserved output in the probationary period, then it increases the screening efficiency, and vice versa. We then assess these predictions empirically by studying a change to teacher tenure policy in New York City, which increased the role that a single measure -- test score value-added -- played in tenure decisions. We show that in response to the policy, teachers increased test score value-added and decreased output that did not enter the tenure decision. The increase in test score value-added was largest for teachers with more ability to improve students' untargeted outcomes, increasing their likelihood of getting tenure. We estimate that the endogenous response to the policy announcement reduced the screening efficiency gap -- defined as the reduction in screening efficiency stemming from the partial observability of output -- by 28%, effectively shifting some of the cost of partial observability from the post-tenure period to the pre-tenure period.
Community schools are an increasingly popular strategy used to improve the performance of students whose learning may be disrupted by non-academic challenges related to poverty. Community schools partner with community-based organizations (CBOs) to provide integrated supports such as health and social services, family education, and extended learning opportunities. With over 300 community schools, the New York City Community Schools Initiative (NYC-CS) is the largest of these programs in the country. Using a novel method that combines a multiple-rating regression discontinuity design (MRRDD) with machine learning (ML) techniques, we estimate the causal effect of NYC-CS on elementary and middle school student attendance and academic achievement. We find an immediate reduction in chronic absenteeism of 5.6 percentage points, which persists over the following three years. We also find large improvements in math and ELA test scores – an increase of 0.26 and 0.16 standard deviations, respectively, by the third year after implementation – although these effects took longer to manifest than the effects on attendance. Our findings suggest that improved attendance is a leading indicator of success of this model and may be followed by longer-run improvements in academic achievement, which has important implications for how community school programs should be evaluated.
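The paper's MRRDD combines multiple rating dimensions with ML and is beyond a short sketch, but the core discontinuity logic can be illustrated with a single-rating sharp regression discontinuity on simulated data. Everything below is hypothetical (variable names, cutoff, effect size), not the NYC-CS data; schools below a cutoff rating are assumed to receive the program.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: one eligibility rating; units below the cutoff (0)
# receive the program. The -0.056 "effect" mimics a drop in chronic
# absenteeism but is an assumption for illustration only.
n = 4000
rating = rng.uniform(-1, 1, size=n)
treated = (rating < 0).astype(float)
effect = -0.056
outcome = 0.30 + 0.1 * rating + effect * treated + rng.normal(0, 0.05, size=n)

def sharp_rd(rating, outcome, bandwidth=0.25):
    """Local-linear sharp RD: fit a line on each side of the cutoff within
    a bandwidth, then take the gap in intercepts at rating = 0."""
    intercepts = {}
    for side, mask in [("below", (rating < 0) & (rating > -bandwidth)),
                       ("above", (rating >= 0) & (rating < bandwidth))]:
        X = np.column_stack([np.ones(mask.sum()), rating[mask]])
        beta, *_ = np.linalg.lstsq(X, outcome[mask], rcond=None)
        intercepts[side] = beta[0]  # intercept = estimated limit at the cutoff
    return intercepts["below"] - intercepts["above"]  # treated minus control side

tau = sharp_rd(rating, outcome)
```

The estimate `tau` recovers the simulated discontinuity; the multiple-rating version generalizes this by requiring eligibility on several ratings at once.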
There is an emerging consensus that teachers impact multiple student outcomes, but it remains unclear how to measure and summarize the multiple dimensions of teacher effectiveness into simple metrics for research or personnel decisions. We present a multidimensional empirical Bayes framework and illustrate how to use noisy estimates of teacher effectiveness to assess the dimensionality and predictive power of teachers' true effects. We find that it is possible to efficiently summarize many dimensions of effectiveness and most summary measures lead to similar teacher rankings; however, focusing on any one specific measure alone misses important dimensions of teacher quality.
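As an illustration of the shrinkage idea behind an empirical Bayes framework, here is a minimal sketch in its simplest univariate form (a textbook method-of-moments version, not the paper's multidimensional estimator); the function name and simulated numbers are hypothetical.

```python
import numpy as np

def eb_shrink(estimates, se):
    """Univariate empirical Bayes shrinkage of noisy effect estimates.

    Each estimate is pulled toward the grand mean by its reliability,
    lambda_j = var_true / (var_true + se_j**2).
    """
    estimates = np.asarray(estimates, dtype=float)
    se = np.asarray(se, dtype=float)
    grand_mean = estimates.mean()
    # Method-of-moments estimate of the variance of true effects:
    # total variance minus average sampling variance, floored at zero.
    var_true = max(estimates.var(ddof=1) - np.mean(se**2), 0.0)
    reliability = var_true / (var_true + se**2)
    return grand_mean + reliability * (estimates - grand_mean)

# Simulated example: true teacher effects (sd 0.15) observed with noise (sd 0.10).
rng = np.random.default_rng(0)
true_effects = rng.normal(0.0, 0.15, size=200)
se = np.full(200, 0.10)
noisy = true_effects + rng.normal(0.0, se)
shrunk = eb_shrink(noisy, se)
```

The shrunken estimates are less dispersed than the raw ones; the multidimensional version in the paper extends this by modeling the covariance across outcome dimensions as well.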
We show that natural disasters affect a region’s aggregate human capital through at least four channels. In addition to causing out-migration, natural disasters reduce student achievement, lower high school graduation rates, and decrease post-secondary attendance. We estimate that disasters that cause at least $500 in per capita property damage reduce the net present value (NPV) of an affected county’s human capital by an average of $505 per person. These negative effects on human capital are not restricted to large disasters: less severe events – disasters with property damages of $100-$500 per capita – also cause significant and persistent reductions in student achievement and post-secondary attendance.
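The $505-per-person figure comes from the paper's own estimates; purely as a reminder of what a net present value calculation involves, here is a generic discounting helper with hypothetical loss streams and an assumed discount rate.

```python
def npv(cash_flows, rate=0.03):
    """Net present value of a stream of per-period amounts.

    `cash_flows[t]` is the amount in period t; `rate` is the per-period
    discount rate (the 3% default is an illustrative assumption).
    """
    return sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows))

# Hypothetical: $100 of per-person human-capital loss in each of three years.
undiscounted = npv([100.0, 100.0, 100.0], rate=0.0)   # simple sum
discounted = npv([100.0, 100.0, 100.0], rate=0.03)    # smaller than the sum
```

Higher discount rates shrink the present value of losses that arrive further in the future.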
We consider programs in which the number of seats is limited, such as a job training program or a supplemental tutoring program, and explore the implications that peer effects have for which individuals should be assigned to the limited seats. In the frequently studied case in which all applicants are assigned to a group, the average outcome is unchanged by shuffling the group assignments if the peer effect is linear in the average composition of peers. However, when there are fewer seats than applicants, the presence of linear-in-means peer effects can dramatically influence the optimal choice of who gets to participate. We illustrate how peer effects impact optimal seat assignment, both under a general social welfare function and under two commonly used social welfare functions. We next use data from a recent job training RCT to provide evidence of large peer effects in the context of job training for disadvantaged adults. Finally, we combine the two results to show that the program's effectiveness varies greatly depending on whether the assignment choices account for or ignore peer effects.
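The contrast between the two cases can be sketched numerically. In this hypothetical simulation (not the paper's model or data), outcomes follow a linear-in-means specification, y_i = a_i + beta * mean(a) over i's group: shuffling equal-sized groups leaves the overall average unchanged, but when only some applicants get seats, who is selected moves every participant's outcome through the group mean.

```python
import numpy as np

rng = np.random.default_rng(1)
ability = rng.normal(0.0, 1.0, size=120)  # hypothetical applicant "ability"
beta = 0.5                                # assumed peer-effect strength

def avg_outcome(members):
    """Average outcome when `members` form one group: y_i = a_i + beta * mean(a)."""
    a = ability[members]
    return float(np.mean(a + beta * a.mean()))

# Case 1: everyone is assigned, split into equal-sized groups.
# Under linear-in-means, shuffling the split leaves the overall average unchanged.
def overall_avg(order, n_groups=4):
    groups = np.array_split(order, n_groups)
    return float(np.mean([np.mean(ability[g] + beta * ability[g].mean())
                          for g in groups]))

base = overall_avg(np.arange(120))
shuffled = overall_avg(rng.permutation(120))

# Case 2: only 40 seats. Selection now matters, because the mean ability of
# those admitted enters every participant's outcome.
top40 = np.argsort(ability)[-40:]
random40 = rng.choice(120, size=40, replace=False)
```

Here `base` and `shuffled` coincide up to floating-point error, while `avg_outcome(top40)` and `avg_outcome(random40)` differ, showing how the seat-assignment choice interacts with the peer effect.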
Researchers often include covariates when they analyze the results of randomized controlled trials (RCTs), valuing the increased precision of the estimates over the risk of inducing small-sample bias when doing so. In this paper, we develop a sufficient condition which ensures that the inclusion of covariates does not cause small-sample bias in the effect estimates. Using this result as a building block, we develop a novel approach that uses machine learning techniques to reduce the variance of the average treatment effect estimates while guaranteeing that the effect estimates remain unbiased. The framework also highlights how researchers can use data from outside the study sample to improve the precision of the treatment effect estimate, by using the auxiliary data to better model the relationship between the covariates and the outcomes. We conclude with a simulation that highlights the value of the proposed approach.
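The auxiliary-data idea can be sketched as follows. In this hypothetical simulation (a simple linear prediction model standing in for the paper's ML machinery), the covariate model is fit on outside data, so the fitted predictions are independent of the trial's treatment assignment; differencing mean residuals then yields a lower-variance treatment effect estimate.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(n, tau=1.0):
    """Hypothetical data: 3 covariates, random binary treatment, linear outcome."""
    x = rng.normal(size=(n, 3))
    t = rng.integers(0, 2, size=n)
    y = x @ np.array([2.0, -1.0, 0.5]) + tau * t + rng.normal(size=n)
    return x, t, y

# Auxiliary (non-experimental) sample: fit the covariate model here, so the
# prediction function does not depend on the trial's treatment draws.
x_aux, _, y_aux = simulate(5000, tau=0.0)
coef, *_ = np.linalg.lstsq(x_aux, y_aux, rcond=None)

# Trial sample: subtract the pre-fit prediction, then difference group means.
x, t, y = simulate(2000, tau=1.0)
resid = y - x @ coef
tau_adj = resid[t == 1].mean() - resid[t == 0].mean()   # adjusted estimate
tau_raw = y[t == 1].mean() - y[t == 0].mean()           # simple difference in means
```

Both estimators are centered on the true effect, but the residualized outcomes have much less variance than the raw outcomes, which is the precision gain the abstract describes.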