- Megan Kuhfeld
Noncognitive constructs such as self-efficacy, social awareness, and academic engagement are widely acknowledged as critical components of human capital, but systematic data collection on such skills in school systems is complicated by conceptual ambiguities, measurement challenges, and resource constraints. This study addresses this issue by comparing the predictive validity of the two most widely used metrics of noncognitive outcomes, observable academic behaviors (e.g., absenteeism, suspensions) and student self-reported social and emotional learning (SEL) skills, for the likelihood of high school graduation and postsecondary attainment. Our findings suggest that, conditional on student demographics and achievement, academic behaviors are several-fold more predictive than SEL skills for all long-run outcomes, and that adding SEL skills to a model that already includes academic behaviors improves its predictive power only minimally. In addition, academic behaviors are particularly strong predictors of low-achieving students' long-run outcomes. Part-day absenteeism (as a result of class skipping) is the largest driver of the strong predictive power of academic behaviors. Developing more nuanced behavioral measures in existing administrative data systems may be a fruitful strategy for schools whose goal centers on predicting students' educational attainment.
Four-day school weeks are becoming increasingly common in the United States, but their effect on students’ achievement is not well understood. The small body of existing research suggests the four-day schedule has relatively small, negative average effects (~-0.02 to -0.09 SD) on annual, standardized state test scores in math and reading, but these studies include only a single state or are limited by using district-level data. We conduct the first multi-state, student-level analysis that estimates the effect of four-day school weeks on student achievement and a more proximal measure of within-year growth using NWEA MAP Growth assessment data. We conduct difference-in-differences analyses to estimate the effect of attending a four-day week school relative to attending a five-day week school. We estimate significant negative effects of the schedule on spring reading achievement (-0.07 SD) and fall-to-spring achievement gains in math and reading (-0.06 SD in both). The negative effects of the schedule are disproportionately larger in non-rural schools than rural schools and for female students, and they may grow over time. Policymakers and practitioners will need to weigh the policy’s demonstrated negative average effects on achievement in their decisions regarding whether and how to implement a four-day week.
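The difference-in-differences logic described above can be sketched in a few lines. The group means below are invented placeholders for illustration, not the study's data; they are chosen only to match the sign of the reported reading effect.

```python
# Minimal sketch of a two-group, two-period difference-in-differences
# estimate, with hypothetical mean spring scores in SD units.

def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """DiD = (change in the treated group) - (change in the comparison group)."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical means: four-day-week schools vs. five-day-week schools,
# before and after schedule adoption (all values invented).
effect = did_estimate(treat_pre=0.00, treat_post=-0.05,
                      control_pre=0.00, control_post=0.02)
print(round(effect, 2))  # -0.07
```

The actual analysis conditions on student-level covariates and multiple time periods; this sketch shows only the core double-differencing idea.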
The COVID-19 pandemic has been a seismic and on-going disruption to K-12 schooling. Using test scores from 5.4 million U.S. students in grades 3-8, we tracked changes in math and reading achievement across the first two years of the pandemic. Average fall 2021 math test scores in grades 3-8 were .20-.27 standard deviations (SDs) lower relative to same-grade peers in fall 2019, while reading test scores decreased by .09-.18 SDs. Achievement gaps between students in low-poverty and high-poverty elementary schools grew by .10-.20 SDs, primarily during the 2020-21 school year. Observed declines are more substantial than during other recent school disruptions, such as those due to natural disasters.
Nearly one in five U.S. students attends a rural school, yet we know very little about achievement gaps and academic growth in rural schools. This study leverages a unique dataset that includes longitudinal test scores for more than five million 3rd to 8th grade students in approximately 17,000 public schools across the 50 states, including 900,000 students attending 4,727 rural schools. We find that average achievement and growth in rural schools are slightly higher than in public schools overall. But there is considerable heterogeneity by student race/ethnicity. For all grades and subjects, White-Black and White-Hispanic gaps are smaller in rural schools than gaps nationwide, and White-Native American gaps are larger in rural schools than gaps nationwide. Separate analyses by racial/ethnic subgroup show that rural Black, Hispanic, and Native American students are often growing more slowly than their respective subgroup national average. In contrast, White students are often growing faster than the national average for White students.
This study presents a framework that uses academic trajectories in the middle grades for identifying students in need of intervention and providing targeted support. We apply a set of academic college readiness benchmarks to rich longitudinal data for more than 360,000 students in 5,900 schools across 49 states and the District of Columbia. In both math and reading, each student was assessed up to six times (fall and spring of 6th, 7th, and 8th grade). We show that student-level and school-level demographic characteristics significantly predict academic trajectories. Compared to White and Asian students, higher proportions of Black and Hispanic students are consistently off-track for college readiness throughout middle school. Among students who started 6th grade on track, being male, being Black or Hispanic, and attending schools with a higher percentage of students eligible for free or reduced-price lunch are positively associated with falling off track.
With 55 million students in the United States out of school due to the COVID-19 pandemic, education systems are scrambling to meet the needs of schools and families, including planning how best to approach instruction in the fall given students may be farther behind than in a typical year. Yet, education leaders have little data on how much learning has been impacted by school closures. While the COVID-19 learning interruptions are unprecedented in modern times, existing research on the impacts of missing school (due to absenteeism, regular summer breaks, and school closures) on learning can nonetheless inform projections of potential learning loss due to the pandemic. In this study, we produce a series of projections of COVID-19-related learning loss and its potential effect on test scores in the 2020-21 school year based on (a) estimates from prior literature and (b) analyses of typical summer learning patterns of five million students. Under these projections, students are likely to return in fall 2020 with approximately 63-68% of the learning gains in reading relative to a typical school year and with 37-50% of the learning gains in math. However, we estimate that losing ground during the COVID-19 school closures would not be universal, with the top third of students potentially making gains in reading. Thus, in preparing for fall 2020, educators will likely need to consider ways to support students who are academically behind and further differentiate instruction.
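The projection arithmetic above is straightforward to illustrate: a projected fall-2020 gain is the typical-year gain scaled by the share of gains retained. The typical-year gain used below is a hypothetical placeholder, not a value from the study; only the retained-share percentages come from the abstract.

```python
# Sketch of the learning-loss projection arithmetic, assuming a made-up
# typical-year reading gain of 0.30 SD.

def projected_gain(typical_gain_sd, share_retained):
    """Projected gain = typical annual gain (in SD) x share of gains retained."""
    return typical_gain_sd * share_retained

typical_reading_gain = 0.30  # hypothetical typical-year reading gain, in SD
low = projected_gain(typical_reading_gain, 0.63)   # lower bound: 63% retained
high = projected_gain(typical_reading_gain, 0.68)  # upper bound: 68% retained
print(round(low, 3), round(high, 3))  # 0.189 0.204
```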
Important educational policy decisions, like whether to shorten or extend the school year, often require accurate estimates of how much students learn during the year. Yet, related research relies on a mostly untested assumption: that growth in achievement is linear throughout the entire school year. We examine this assumption using a data set containing math and reading test scores for over seven million students in kindergarten through 8th grade across the fall, winter, and spring of the 2016-17 school year. Our results indicate that assuming linear within-year growth is often not justified, particularly in reading. Implications for investments in extending the school year, summer learning loss, and racial/ethnic achievement gaps are discussed.
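The linearity assumption above can be checked directly with seasonal test scores: under strictly linear within-year growth, the per-month gain from fall to winter should equal the per-month gain from winter to spring. The scores and month counts below are invented for illustration.

```python
# Sketch of a simple linearity check using hypothetical fall, winter, and
# spring scores for one grade (all values invented).

def monthly_rate(score_start, score_end, months):
    """Average score gain per month between two test occasions."""
    return (score_end - score_start) / months

fall, winter, spring = 200.0, 206.0, 209.0   # hypothetical scale scores
rate_fw = monthly_rate(fall, winter, 4)      # e.g., September to January
rate_ws = monthly_rate(winter, spring, 4)    # e.g., January to May
print(rate_fw, rate_ws)  # 1.5 0.75 -- growth decelerates, i.e., non-linear
```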
The belief that additional time allows children to become more ready for school has affected public policy and individual practices. Prior studies estimated either associations between school entry age and academic growth or causal effects on achievement measured at one or two points. This paper contributes novel causal evidence for the impacts of kindergarten entry age on academic growth in the first three years of school. We embed regression discontinuity into a piecewise multilevel growth model and apply it to rich assessment data from three states. Being a year older leads to higher initial achievement and higher kindergarten growth rates but lower growth rates during 1st and 2nd grades. Effects do not differ by gender or race.
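The piecewise growth component of the model can be sketched as a slope that changes at the end of kindergarten. All parameter values below are invented, and the regression-discontinuity part (comparing children just above and below the entry-age cutoff) is omitted; only the piecewise-linear shape is shown.

```python
# Sketch of a piecewise-linear growth curve: one slope during kindergarten,
# a different slope in grades 1-2 (all parameters hypothetical).

def predicted_score(years_since_entry, intercept, slope_k, slope_later, knot=1.0):
    """Predicted score with a slope change at the knot (end of kindergarten)."""
    if years_since_entry <= knot:
        return intercept + slope_k * years_since_entry
    return intercept + slope_k * knot + slope_later * (years_since_entry - knot)

# Hypothetical pattern matching the abstract: older entrants start higher and
# grow faster in K, but grow more slowly in grades 1-2 than younger entrants.
older = predicted_score(1.0, intercept=105, slope_k=12, slope_later=5)
younger = predicted_score(1.0, intercept=100, slope_k=10, slope_later=7)
print(older, younger)  # 117 110
```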
A huge portion of what we know about how humans develop, learn, behave, and interact is based on survey data. Researchers use longitudinal growth modeling to understand the development of students on psychological and social-emotional learning constructs across elementary and middle school. In these designs, students are typically administered a consistent set of self-report survey items across multiple school years, and growth is measured either based on sum scores or scale scores produced based on item response theory (IRT) methods. While there is a great deal of guidance on scaling and linking IRT-based large-scale educational assessments to facilitate the estimation of examinee growth, little of this expertise is brought to bear in the scaling of psychological and social-emotional constructs. Through a series of simulation and empirical studies, we produce scores in a single-cohort repeated-measures design using sum scores as well as multiple IRT approaches and compare the recovery of growth estimates from longitudinal growth models using each set of scores. Results indicate that using scores from multidimensional IRT approaches that account for latent variable covariances over time in growth models leads to better recovery of growth parameters relative to models using sum scores and other IRT approaches.
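The sum-score branch of the comparison above is easy to sketch: compute a sum score per survey wave and fit a linear growth slope by ordinary least squares. The per-wave scores below are invented; the multidimensional IRT alternatives require a psychometric library and are not shown.

```python
# Sketch: fitting a linear growth slope to one student's per-wave sum scores
# via ordinary least squares (all score values hypothetical).

def ols_slope(xs, ys):
    """OLS slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

waves = [0, 1, 2]            # e.g., spring of grades 6, 7, and 8
sum_scores = [14, 17, 21]    # hypothetical survey sum scores per wave
print(ols_slope(waves, sum_scores))  # 3.5 points of growth per year
```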
Survey respondents use different response styles when they use the categories of the Likert scale differently despite having the same true score on the construct of interest. For example, respondents may be more likely to use the extremes of the response scale independent of their true score. Research already shows that differing response styles can create a construct-irrelevant source of bias that distorts fundamental inferences made based on survey data. While some initial studies examine the effect of response styles on survey scores in longitudinal analyses, the issue of how response styles affect estimates of growth is underexamined. In this study, we conducted empirical and simulation analyses in which we scored surveys using item response theory (IRT) models that do and do not account for response styles, and then used those different scores in growth models and compared results. Generally, we found that response styles can affect estimates of growth parameters including the slope, but that the effects vary by psychological construct, response style, and model used.
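A minimal illustration of the distortion described above, assuming a 5-point Likert scale: two hypothetical respondents with the same underlying level of the construct, one using mid-scale categories and one using the extremes, can receive different sum scores. The response patterns below are invented.

```python
# Two hypothetical respondents with the same "true" construct level but
# different response styles on five 5-point Likert items.
moderate = [3, 3, 4, 3, 4]   # mid-scale category use
extreme = [5, 5, 1, 5, 5]    # extreme category use

# Their sum scores differ even though the construct level is the same,
# so the style is a construct-irrelevant source of score variance.
print(sum(moderate), sum(extreme))  # 17 21
```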