The idea of holding schools accountable for students’ performance has stood at the center of Democrats’ and Republicans’ school-reform efforts in the United States for more than two decades—and has sparked significant debate. One of the many questions that have been raised is whether accountability efforts could backfire by driving good teachers out of poorly rated schools, creating a vicious cycle for principals attempting to turn their institutions around.
Chicago Booth’s Rebecca Dizon-Ross finds that sometimes the reverse is true: in certain circumstances, a bad accountability mark for a school decreases the likelihood that a teacher will leave, and even leads to more good teachers joining.
Dizon-Ross analyzed data from New York City’s public-school system, which began assigning school-accountability ratings in 2007. Grades of A–F are assigned every November, based on measurements taken the previous school year, including standardized test scores, attendance, and student and parent surveys. Schools that earn As and Bs win extra funding, whereas schools rated C and below face serious sanctions, including potential closure. Teachers themselves are neither punished nor rewarded for the ratings their schools receive.
Dizon-Ross examined the impact of one year’s ratings on teacher turnover the following summer, looking in particular at schools that hovered at the cutoff points between two ratings. That allowed her to get a clean estimate of the effect of receiving a lower rating, since schools on either side of a cutoff otherwise looked similar on paper. She finds that schools that just missed a D and instead got an F, or that just missed a C and instead got a D, saw teacher turnover fall 20 percent below baseline.
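The logic of this cutoff comparison can be illustrated with a stylized sketch. The numbers, cutoff score, and bandwidth below are hypothetical, not from the study; the point is only to show how comparing schools just above and just below a rating cutoff isolates the effect of the lower grade.

```python
# Stylized regression-discontinuity comparison (hypothetical data).
# Schools scoring near a rating cutoff look similar on paper, so comparing
# turnover just below vs. just above the cutoff approximates the causal
# effect of receiving the lower grade.

CUTOFF = 50.0      # hypothetical score separating a D from an F
BANDWIDTH = 2.0    # how close to the cutoff a school must be to count

# (score, teacher_turnover_rate) for hypothetical schools
schools = [
    (48.5, 0.16), (49.2, 0.17), (49.8, 0.15),   # just below: rated F
    (50.3, 0.20), (50.9, 0.21), (51.6, 0.19),   # just above: rated D
]

below = [t for s, t in schools if CUTOFF - BANDWIDTH <= s < CUTOFF]
above = [t for s, t in schools if CUTOFF <= s <= CUTOFF + BANDWIDTH]

avg_below = sum(below) / len(below)
avg_above = sum(above) / len(above)

# Relative effect of the lower grade on turnover
effect = (avg_below - avg_above) / avg_above
print(f"turnover just below cutoff: {avg_below:.3f}")
print(f"turnover just above cutoff: {avg_above:.3f}")
print(f"relative change: {effect:+.1%}")
```

In this made-up example, turnover at the lower-rated schools comes out 20 percent below that of their just-higher-rated neighbors, mirroring the direction and size of the effect the study reports.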
When school-accountability ratings affect teacher turnover
At schools graded on an A–F accountability scale, teacher retention is stronger at schools that fall just below a rating cutoff and receive the lower grade.
She also assessed teacher quality by using a “value-added” measure that essentially looked at each teacher’s contributions to her students’ year-on-year test-score gains. She concludes that at the schools that just missed the higher rating, high-quality teachers were no more likely to leave, while teachers who joined were better, on average.
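The intuition behind a value-added measure can be sketched roughly as follows. The scores and the simple averaging here are hypothetical stand-ins; actual value-added models adjust for many student characteristics rather than comparing against a single district-wide average.

```python
# Rough sketch of a "value-added" measure (hypothetical data).
# A teacher's value added is her students' average year-on-year
# test-score gain, relative to the gain expected for comparable
# students. Here the expected gain is just a district-wide average.

def value_added(student_gains, expected_gain):
    """Average gain of this teacher's students minus the expected gain."""
    avg_gain = sum(student_gains) / len(student_gains)
    return avg_gain - expected_gain

expected_gain = 5.0          # hypothetical average score gain per year

teacher_a = [8, 9, 7, 10]    # score gains for teacher A's students
teacher_b = [3, 4, 5, 4]     # score gains for teacher B's students

print(value_added(teacher_a, expected_gain))  # positive: above-average teacher
print(value_added(teacher_b, expected_gain))  # negative: below-average teacher
```

A positive value suggests the teacher’s students gained more than comparable students elsewhere; a negative value suggests they gained less.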
These results apply only to schools that just missed a C or a D rating. At schools hovering around the dividing line between an A and a B, or a B and a C, by contrast, there were no benefits for the schools that narrowly missed the higher rating. In fact, the analysis suggests that the quality of teachers joining the lower-rated schools was below that of teachers joining the higher-rated ones.
Dizon-Ross considered the possible mechanisms driving teachers’ job moves. For example, why did turnover fall at the lower-rated schools at the bottom end of the performance spectrum—the schools that nearly received Ds and Fs? One explanation is that a teacher could be stigmatized by a school’s rating and therefore less able to land a new job. But that theory is contradicted by the finding that more high-quality teachers were willing to join low-rated schools despite any stigma. Instead, Dizon-Ross hypothesizes, low ratings carry serious consequences, such as school closure, and put pressure on principals to make improvements—and teaching at an improving school is a rewarding experience, enough so that good teachers are more likely to stay and other good teachers are more likely to join.
This doesn’t happen at higher-performing schools because New York City doesn’t punish schools for getting a B rating rather than an A, meaning principals feel little pressure to make changes for the better. Here, New York’s accountability system might not be inducing positive changes, with lower scores at best having no effect on turnover and at worst damping a school’s ability to recruit strong teachers. “At the top . . . accountability pressures were not strong enough to spur positive changes,” says Dizon-Ross.