January 21, 2015

What’s Behind Door #3? The Giant Local Testing Loophole in Alexander’s ESEA Proposal

By Bellwether


There’s been no shortage of column inches devoted to testing and the “choose your own adventure” approach in Sen. Lamar Alexander’s draft to rewrite the Elementary and Secondary Education Act. And annual testing will likely dominate the discussion at the first Senate hearing on reauthorization today, even though many (including key witnesses, like Brookings’ Marty West) have already shown why backing away from annual testing is a horrible plan.
But annual testing is only half the story. That’s because Alexander’s bill doesn’t just offer two statewide testing options for policymakers to fight about. It also offers a separate testing option for districts on top of the state choices. And although education wonks are up in arms over the merits of door #1 vs. door #2 for states, most have, unfortunately, ignored the giant local testing loophole behind door #3.
Through it, districts could opt out of statewide testing and use their own tests instead, regardless of whether Congress chooses door #1 or door #2. But the real kicker is that this loophole isn’t actually new at all. Alexander’s draft bill just makes it far easier for districts to take advantage of–and abuse–existing flexibility.
Districts would only need state approval that their local assessments meet the same federal requirements with which state tests comply. And given the increasing number of districts pushing back on state testing, door #3 would be an irresistible option for many, even as it undermines the comparability of data between schools for evaluation and accountability; states’ abilities to provide technical assistance, support, and professional development to districts; and state investments in new assessment systems aligned to college- and career-ready standards.

Let me wonk out and explain: Federal Title I assessment regulations enacted just after NCLB’s passage already allow for locally-designed assessments. They just set a really, really high bar to use them in place of state tests–as one Department official tried to explain to a skeptical Senator Hillary Clinton in a 2002 hearing (after all, wasn’t the point of NCLB “to have some coherent testing system that allowed comparisons?”). The regs stipulate that states must establish technical criteria to review and approve local tests and demonstrate that they:

  1. are “equivalent” to one another and to state tests “in their content coverage, difficulty, and quality,”
  2. “have comparable validity and reliability,” and
  3. give “unbiased, rational, and consistent determinations of the annual progress of schools” so that the state can use them to make annual determinations for accountability.

Then, the Secretary, through the federal peer review process of states’ assessments, gets the final say over whether these testing systems pass muster.
No locally-developed assessment system (that I can find) has ever met those requirements, even though states like Nebraska tried for years. Local test flexibility and statewide test comparability appear to be a policy unicorn. For instance, in 2005, “independent analyses conducted… indicate that the four performance levels adopted by Nebraska cannot be accurately reported.” Two years later, the state couldn’t show that its review of local tests fully evaluated the “technical quality of the local assessments (including validity and reliability), alignment of those assessments with grade-level academic content standards, and inclusion of all students in the grades assessed.” In total, the seven missives between the Department and state officials demonstrate just how lacking Nebraska’s local tests–and the systems for monitoring them–were in providing the key functions of student assessment required in law.
This is what makes Alexander’s locally-designed testing provision such an awful idea. It’s not just that the provision is included in the draft bill–it’s already in Title I regs. It’s that the bill would also eliminate oversight of the process, pretending that local assessments are easily cobbled into a system that stacks up equally with statewide tests, when the truth is that it’s incredibly difficult and has never been done before.
These critical requirements for the quality of assessments, like so many other federal safeguards, become unenforceable in the Alexander draft. State testing plans, including the process for approving district opt-outs, would be submitted via assurances (rather than with actual evidence). Further, the Department couldn’t regulate any criteria that define, specify, or prescribe the standards states use to develop their assessments. In effect, this guts federal peer review of state (and local) testing systems, despite the fact that everyone agrees low-quality tests are feeding the testing backlash. (For proof, just look at the range of groups supporting the Bonamici/Baldwin SMART Act, which proposes auditing state assessments and reducing redundant, low-quality tests.)
Local assessments are not prohibited under current law. What’s prohibited is letting districts run amok and opt out without proving that they have, in fact, completed the quixotic task of creating tests that are high-quality, valid, reliable, and comparable to state ones. At a time when states and districts are exploring new ways to assess students’ college and career readiness and looking to improve the quality, content, coverage, and delivery of these tests, Alexander’s proposal is a reckless, unnecessary, and irresponsible way to encourage their development.
