Student surveys are widely used to evaluate university teaching and are increasingly adopted at the K-12 level, yet considerable debate remains about what they measure. Much of the disagreement centers on the well-documented correlation between the grades students receive and their evaluations of instructors. Using individual-level data from 19,000 evaluations of 700 course sections at a flagship public university, we leverage both within-course and within-student variation to rule out popular explanations for this correlation. Specifically, we show that the relationship cannot be explained by instructional quality, workload, grading stringency, or student sorting into courses. Instead, student satisfaction with grades, regardless of the grades' underlying cause, appears to be an important driver of course evaluations. We also report results from a randomized intervention designed to weaken this association by reminding students to focus on relevant teaching and learning considerations and by increasing the salience of the stakes that evaluations carry for instructor careers. These reminders, however, prove ineffective at muting the relationship between grades and evaluation scores.