Should Massachusetts PARCC the MCAS? Plus 5 Questions for Other States.
By Bellwether
October 25, 2015

A recent Mathematica study found that new PARCC assessments were statistically no better at predicting which students were ready for college than Massachusetts’ old student assessments (called MCAS). Both tests were slightly superior to the SAT at identifying which students would be successful in college-level courses, but the study should prod all states’ thinking in a few ways:
1. Should we keep trying to build a better mousetrap? If $186 million and five years of work at PARCC* can’t produce something much better than what Massachusetts developed on its own in 1993, why continue this race?
2. If states pick a random test from the cheapest assessment vendor, will they get results even as good as what we’re seeing here? The study also found that cut scores matter: although the two tests produced results that were statistically indistinguishable, the PARCC cut score did send a stronger signal than the cut score Massachusetts had been using on MCAS. To examine how that’s playing out for their students, all states should be doing studies like this one, and the other federally funded assessment consortium, Smarter Balanced, should be as well. I suspect the results would be no different, or perhaps worse, than what Mathematica found in Massachusetts. If even the highest-quality standards and the best tests we have at the K-12 level don’t tell us that much about readiness for college, what chance do individual states have of coming up with something better?
3. Related to #2, should states still be coming up with their own high school achievement tests? Why don’t more states opt to use the SAT or ACT** as their high school accountability tests? This study found that PARCC and MCAS had slightly more predictive value than the SAT, but there are trade-offs. The SAT and the ACT are older, shorter, and cheaper than what states typically offer, plus they’re familiar to parents and, unlike a given state’s assessment, SAT and ACT scores are accepted by colleges and universities all across the country. The ACT and SAT have proven themselves to be useful, if slightly flawed, measures of college readiness. Why do states think they can do better?
4. How much should we value alignment between K-12 academic standards and tests? One objection to just using the SAT or ACT for official state purposes is that they’re not aligned to each state’s academic content standards. But so what? Both PARCC and MCAS are closely aligned to Massachusetts’ state academic standards, but neither one is all that closely aligned to actual student outcomes at Massachusetts colleges and universities.
5. If there’s only a moderate relationship between high school test scores and first-year college GPA (let alone longer-term GPA or college completion rates), why do we continue to rely solely on these tests for accountability purposes? I happen to have a whole paper on this topic, but this study is yet another reminder that if states care about college-readiness, they need to be tracking actual college outcomes, not just test scores.
*Disclosure: The PARCC consortium is a Bellwether client, but I am not involved in the work.
**ACT is a former Bellwether client. It was not part of the Mathematica study, but its scores show very similar correlations with first-year college outcomes.