Summative Ratings Are All Around Us. Why Are We Afraid of Them in K-12 Education?
By Bellwether
August 5, 2016

The Department of Education solicited feedback on its draft regulations for the Every Student Succeeds Act (ESSA). The draft regulations would clarify that, under ESSA, states must issue “summative ratings” for each of their schools. Although there’s been pushback against that requirement on a few fronts, I honestly don’t understand it. In my comment on the draft regulations, I offered six reasons why states should issue summative ratings for their schools:
1. Summative ratings are all around us. Take movies. If you want to see a movie, you might consult a site like IMDb or Rotten Tomatoes. Or you might prefer to watch Oscar-nominated films that reflect critics’ picks for the best movies each year. If you have kids, you might want to know a movie’s rating for age-appropriateness. Or, if you want to purchase a movie, you can go to Amazon to see its 5-star ratings or read individual reviews.
Cars, colleges, neighborhoods, restaurants: you name it. If there’s some sort of choice that people can make, there’s probably at least one rating system, if not several, to help them decide. Even the National Education Association, which opposes the idea of rating schools, has its own A-F grading system for individual legislators. And just this week the American Federation of Teachers, which also opposes school rating systems, praised a report relying on an A-F grading system.
2. Summative ratings are popular. It’s not just that summative ratings exist; they’re also extremely popular. Consumer Reports is an entire magazine devoted to rating everyday household products, and it’s been around since 1936 for a reason. Movie producers and studio executives desperately want their films to earn Oscar nominations, because nominations mean more people will go see their movies (Oscars are awarded in 24 categories, but the Academy Award for Best Picture is the one that drives the most consumer behavior). Hollywood now worries more about aggregate website ratings than about individual critics, because moviegoers are more likely to respond to the numeric scores. And in the education sector, colleges compete to earn high rankings from magazine publishers, and those “Best of” issues tend to sell in huge numbers as people clamor for information about college quality.
Similarly, if parents want information about local schools, they might go to GreatSchools.org. GreatSchools compiles test score information into a uniform 1-10 rating scale, and it draws over 50 million unique visitors a year, which suggests there’s huge demand for information about public schools.
These examples all illustrate the need to help people make sense of the world. While choice, in itself, is a good thing, people tend to get overwhelmed by unlimited options. At some point, there’s a limit to how much information people can process, and they turn to trusted rating systems to help guide them in their decisions.
3. Summative ratings are simple and easy to understand, but they’re not one-dimensional. All of the rating systems mentioned above incorporate various factors (in education-speak, we might say they’re based on “multiple measures”). And while the overall rating gives people a useful shortcut for making decisions, none of these systems stops at a single number. They all include much more information for people who want to dig in further. Amazon and GreatSchools, for example, pair their ratings with more detailed consumer reviews. Consumer Reports and the college ranking systems offer different rankings for people who prioritize different things. Of course, there’s no one “best” car for everyone, and there will never be one “best” school for all kids, but that doesn’t mean we should throw up our hands and give up on trying to help families weigh their options.
4. If states don’t rate their schools, someone else will. There’s already tremendous public demand for sites like GreatSchools and SchoolGrades.org. But those rating sites are based primarily on student proficiency rates on annual standardized tests in reading and math. They don’t measure how much students gain from year to year, how students are doing in subjects other than reading and math, or any other measure of student success. And although SchoolGrades.org attempts to control for the types of students a school enrolls, the ratings at GreatSchools are more a reflection of student demographics than of school quality.
Even before these sites existed, parents used informal networks or realtors to find the “best” schools for their children, which was often code for the “best” neighborhoods. So even if a state would prefer not to define what a “good school” is, someone else will do it anyway. It would be better for states to do it, and do it well, than to leave those definitions entirely to the private market.
5. ESSA’s authors clearly envisioned states creating summative ratings. It’s weird this has to be said at all, but some congressional leaders now deny that ESSA requires summative ratings. These assertions defy any rational reading of the law, and they ignore the history of why the law was written the way it was.
First, as I wrote earlier this year, ESSA “distinguishes between ‘report cards,’ where states must compile and release a wide array of data on school performance and finances, and a ‘statewide accountability system’ with a ‘state-determined methodology’ to identify low-performing schools.” In requiring states to identify certain categories of schools for “comprehensive” and “targeted” support based on their accountability systems, ESSA presupposes some sort of summative rating system (which could be A-F grades, a star or numeric scale, color-coding, etc.).
Second, ESSA includes specific rules about how states must weight the various factors that go into their accountability systems. In late November, just days before ESSA was finalized, a “Republican aide” speculated about very specific percentages and whether they would satisfy the requirement. Why would ESSA impose rules on state rating systems if states weren’t required to have rating systems at all? The more logical reading is that ESSA includes these rules because its authors were worried about how states would design such systems, not whether they would have them; the sketch below shows what such a weighted rating looks like in practice.
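To make the weighting question concrete, here is a minimal, purely illustrative sketch of how a summative rating collapses multiple weighted measures into one grade. The measure names, weights, and grade cutoffs are hypothetical examples of mine, not ESSA’s actual requirements or any state’s formula.

```python
# Purely illustrative: a summative school rating built as a weighted
# composite of multiple measures, mapped to an A-F grade. The measure
# names, weights, and cutoffs below are hypothetical, not ESSA's rules
# or any state's actual formula.

# Each measure is scored 0-100; the weights sum to 1.0.
WEIGHTS = {
    "proficiency": 0.30,      # % proficient on annual reading/math tests
    "growth": 0.40,           # year-to-year student growth
    "graduation_rate": 0.20,  # 4-year cohort graduation rate
    "el_progress": 0.10,      # English-learner progress
}

GRADE_CUTOFFS = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]

def summative_rating(measures):
    """Collapse multiple weighted measures into one score and letter grade."""
    composite = sum(WEIGHTS[name] * measures[name] for name in WEIGHTS)
    for cutoff, grade in GRADE_CUTOFFS:
        if composite >= cutoff:
            return composite, grade
    return composite, "F"

# Example: a school strong on growth but weaker on proficiency.
score, grade = summative_rating({
    "proficiency": 62,
    "growth": 88,
    "graduation_rate": 91,
    "el_progress": 75,
})
print(f"Composite: {score:.1f} -> Grade: {grade}")  # Composite: 79.5 -> Grade: C
```

With these made-up numbers, strong growth partially offsets weaker proficiency and the school lands at a “C.” The congressional debate was over the weights, which is exactly the point: arguing about weights only makes sense if a composite rating exists.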
6. Summative ratings force schools to improve. If there’s one thing that’s clear from 13 years of No Child Left Behind, it’s that schools respond to external accountability pressures. They sometimes respond in unhelpful ways, of course, so the challenge is to design accountability systems that encourage schools to focus on measures that truly matter (which is all the more reason states should be involved).
In my piece from January, I cited several studies on accountability that all came to similar conclusions: school rating systems can help focus schools’ attention on the students who need the most help, and those efforts lead to improved short- and long-term outcomes. Since then, I’ve come across even more studies making the same point. When a newspaper in the Netherlands began publishing school rankings, student performance improved. After schools in England earned “fail” ratings, their performance improved across a number of core subjects.
On the opposite end, there’s evidence that ending accountability policies can actually do harm by removing the pressure on low-performing schools to improve. When Wales dropped its school rating system, student achievement declined significantly, particularly for lower-performing students. Similarly, after New York City dropped its A-F rating system and stopped applying pressure to low-performing schools, achievement in formerly F-rated schools immediately fell.
States have lots of leeway in how they design their school rating systems, but there shouldn’t be any question about whether they should have them. Read my suggestions for how states can design smart school rating systems that inform parents and guide local school leaders in productive ways.