Eric S. Taylor
We study the returns to experience in teaching, estimated using supervisor ratings from classroom observations. We describe the assumptions required to interpret changes in observation ratings over time as the causal effect of experience on performance. We compare two difference-in-differences strategies: the two-way fixed effects estimator common in the literature, and an alternative which avoids potential bias arising from effect heterogeneity. Using data from Tennessee and Washington, DC, we show empirical tests relevant to assessing the identifying assumptions and substantive threats—e.g., leniency bias, manipulation, changes in incentives or job assignments—and find our estimates are robust to several threats.
We study teachers’ choices about how to allocate class time across different instructional activities, for example, lecturing, open discussion, or individual practice. Our data come from secondary schools in England, specifically classes preceding GCSE exams. Students score higher in math when their teacher devotes more class time to individual practice and assessment. In contrast, students score higher in English if there is more discussion and work with classmates. Class time allocation predicts test scores separately from the quality of the teacher’s instruction during those activities. These results suggest opportunities to improve student achievement without changes in teachers’ skills.
When an employee expects repeated evaluation and performance incentives over time, the potential future rewards create an incentive to invest in building relevant skills. Because new skills benefit job performance, the effects of an evaluation program can persist after the rewards end, or even arise in anticipation of the rewards beginning. I test for persistence and anticipation effects, along with more conventional predictions, using a quasi-experiment in Tennessee schools. Performance improves with new evaluation measures, but gains are larger when the teacher expects future rewards linked to future scores. Performance rises further when incentives start and remains higher even after incentives end.
This paper reports improvements in teacher job performance, as measured by student test scores, resulting from a program of low-stakes (indeed zero-stakes) peer evaluation. Teachers working at the same school observed and scored each other’s teaching. Students in randomly-assigned treatment schools scored 0.07σ higher on math and English exams (0.09σ lower-bound on TOT). Within each treatment school, teachers were further randomly assigned to roles: observer and observee. Teachers in both roles improved, perhaps slightly more for observers. The typical treatment school completed 2-3 observations per observee teacher. Variation in the number of observations was generated partly by randomly assigning schools a suggested dose: low or high (twice the low dose). Benefits were quite similar across dose conditions.