June 4, 2025

The Leading Indicator: AI in Education Issue Ten

By Alex Spurrier and Marisa Mission


A recent New York Magazine piece details how college students are using artificial intelligence (AI) tools to “outsource” (aka cheat on) huge portions of their assignments. The article immediately sparked a predictable cycle of moral panic in postsecondary and K-12 circles, but here’s what the cheating panic misses: AI isn’t “breaking” education so much as it’s exposing what was already broken. When a large language model (LLM) can bang out a passable five-paragraph essay in seconds, maybe the problem isn’t the machine. Maybe it’s that we’ve been assigning work that wasn’t assessing the right skills in the first place.

Labor market signals should terrify the education sector more than any cheating statistics. LinkedIn’s Chief Economic Opportunity Officer Aneesh Raman laid out the data in a recent op-ed: 30% higher unemployment among recent college graduates since late 2022, with 63% of executives expecting AI to take over entry-level tasks that once served as career on-ramps. As those rungs on career ladders disappear, students furthest from opportunity will fall first.

So, instead of playing whack-a-mole with cheating detection software, policymakers should confront bigger questions of purpose: What knowledge is worth mastering, and why should every student pursue it?

Students still need a coherent body of knowledge — scientific facts, historical narratives, literary texts — that enables critical thinking and democratic participation. AI can retrieve data, but it cannot replace the mental map that lets citizens judge what’s grounded in facts. But knowledge alone isn’t enough to create engaged, thoughtful citizens; they also need values such as intellectual curiosity about truth, humility about one’s limits, perseverance through difficulty, and integrity in one’s work. These are the truly human components that add value when machines handle the easy thinking.

Moving from these principles to practice is easier said than done. Standards must shift, curricula must be overhauled, and assessments will need to become more complex — perhaps through portfolios, version-tracked drafts, and oral defenses — not just one-shot essays a chatbot can conjure. Even if policymakers could make these changes in short order, the Common Core experience should remind us that efforts like this can go off the rails quickly. On top of those political scars, the Mahmoud v. Taylor case, in which Maryland parents seek to exempt their children from lessons featuring LGBTQ-themed storybooks, signals the struggle ahead.

The moment schools claim a moral stance, critics will ask whose morality it is. Some families will press for charter schools, education savings accounts, or opt-out clauses; others will insist that selective exemptions undermine the shared knowledge a democracy requires. To navigate the minefield of parental rights and technological advancements, districts need clear policies on educational objectives first.

The AI revolution in education isn’t optional. The only question is whether schools will lead the transition or be dragged through it. Students leaning on ChatGPT are behaving rationally in a system that rewards completion over comprehension. If we want different choices, we need different incentives — ones that prize enduring knowledge, visible virtues, and work no machine can steal. The cheating panic is just a warning light; real repair means rebuilding the engine.

Education Evolution: AI and Learning

As another academic year winds down, it’s worth stepping back to appreciate how different the use of AI in schools is compared to just a year or two ago. Survey data indicates that the rate of teen ChatGPT use for schoolwork doubled from 13% in 2022 to 26% in 2024 — and usage is likely even higher this year. A representative survey also indicates that nearly half of school districts will have trained their teachers on AI use by the end of the 2024-25 school year (SY), double the rate of SY23-24.

Florida’s Miami-Dade County Public Schools, the nation’s third-largest school district, is an example of this rapid shift. It moved from a ban on AI chatbots two years ago to rolling out Google’s Gemini chatbot to more than 1,000 educators and 100,000 high school students. 

As AI use continues to grow in K-12, students are expressing both optimism and concern, and teachers are similarly conflicted about the technology’s place in schools. Before SY25-26 begins in a few months, school leaders must invest in the people, capacity, and infrastructure needed to ensure that educators and students can adapt to new challenges and opportunities as AI continues to evolve.


The Latest: AI Sector Updates

In May, Anthropic launched Claude 4 in two versions, Sonnet and Opus, the latter of which it hailed as “the world’s best coding model.” Anthropic boasted that, together, the models “are a large step toward the virtual collaborator” (in other words, your new AI teammate). Yet it turns out that Claude 4 can also be your worst teammate: In pre-release safety tests, Opus 4 resorted to blackmailing its engineers to prevent being replaced with another AI model. Claude 4 was ultimately released with extra safeguards, but those may not be all that reassuring — after all, they were specifically developed for “AI systems that substantially increase the risk of catastrophic misuse.”


Pioneering Policy: AI Governance, Regulation, Guidance, and More

As of May 2025, Multistate.ai has tracked more than 1,000 bills introduced in state legislatures to regulate various parts of the AI industry or AI usage. Without comprehensive federal AI regulation, state policymakers are addressing their constituents’ concerns (as is their job). Yet all 1,000-plus of those bills could be swept away if the U.S. Senate approves the current reconciliation bill, which includes a provision that would establish the federal government as the sole authority to regulate AI and prohibit states from passing AI-related laws for the next decade. The bill, which has already passed the House of Representatives, now sits with the Senate for potential approval in June. House Republican authors of the provision argue that it will preempt a patchwork of inconsistent state regulations that could block AI innovation and adoption, leaving the U.S. at a disadvantage in the growing AI race against China.

Critics, however, are alarmed that states would not be able to pass additional protections for their constituents, and they see the move as a capitulation to major tech companies’ desires. In congressional hearings before the House Tech and Commerce Subcommittee, the CEOs of both OpenAI and Google emphasized the need to minimize regulations that could harm AI innovation. These statements echo the priorities expressed in several major tech companies’ responses to the federal government’s March request for information on an AI action plan.

In all the buzz around the reconciliation bill’s spending cuts, debt limit increase, impact on Medicaid, and more — the “big, beautiful bill” is more than 1,000 pages itself — this AI provision may not have hit many people’s radars. However, if it passes, the long-term consequences could change our lives entirely in areas such as education, the workforce, the economy, privacy protections, judicial processes, consumer protections, and more. It’s a provision worth watching.

