A recent New York Magazine piece details how college students are using artificial intelligence (AI) tools to “outsource” (aka cheat on) huge portions of their assignments. The article immediately sparked a predictable cycle of moral panic in postsecondary and K-12 circles, but here’s what the cheating panic misses: AI isn’t “breaking” education so much as it’s exposing what was already broken. When a large language model (LLM) can bang out a passable five-paragraph essay in seconds, maybe the problem isn’t the machine. Maybe it’s that we’ve been assigning work that wasn’t assessing the right skills in the first place.
Labor market signals should terrify the education sector more than any cheating statistics. LinkedIn’s Chief Economic Opportunity Officer Aneesh Raman laid out the data in a recent op-ed: 30% higher unemployment among recent college graduates since late 2022, with 63% of executives expecting AI to take over entry-level tasks that once served as career on-ramps. As those rungs on career ladders disappear, students furthest from opportunity will fall first.
So, instead of playing whack-a-mole with cheating detection software, policymakers should confront bigger questions of purpose: What knowledge is worth mastering, and why should every student pursue it?
Students still need a coherent body of knowledge — scientific facts, historical narratives, literary texts — that enables critical thinking and democratic participation. AI can retrieve data, but it cannot replace the mental map that lets citizens judge what’s grounded in facts. But knowledge alone isn’t enough to create engaged, thoughtful citizens; they also need values such as intellectual curiosity about truth, humility about one’s limits, perseverance through difficulty, and integrity in one’s work. These are the truly human components that add value when machines handle the easy thinking.
Moving from these principles to practice is easier said than done. Standards must shift, curricula must be overhauled, and assessments will need to become more complex — perhaps through portfolios, version-tracked drafts, and oral defenses — not just one-shot essays a chatbot can conjure. Even if policymakers could make these changes in short order, the Common Core experience should remind us that efforts like this can go off the rails pretty quickly. On top of those political scars, the Mahmoud v. Taylor case, in which Maryland parents seek to exempt their children from LGBTQ-themed storybooks, signals the struggle ahead.
The moment schools claim a moral stance, critics will ask whose morality it is. Some families will press for charter schools, education savings accounts, or opt-out clauses; others will insist that selective exemptions undermine the shared knowledge a democracy requires. To navigate the minefield of parental rights and technological advancements, districts need clear policies on educational objectives first.
The AI revolution in education isn’t optional. The only question is whether schools will lead the transition or be dragged through it. Students leaning on ChatGPT are behaving rationally in a system that rewards completion over comprehension. If we want different choices, we need different incentives — ones that prize enduring knowledge, visible virtues, and work no machine can steal. The cheating panic is just a warning light; real repair means rebuilding the engine.
Education Evolution: AI and Learning
As another academic year winds down, it’s worth stepping back to appreciate how different the use of AI in schools is compared to just a year or two ago. Survey data indicate that teen use of ChatGPT for schoolwork doubled from 13% in 2022 to 26% in 2024 — and usage is likely even higher this year. A representative survey also suggests that nearly half of school districts will have trained their teachers on AI use by the end of the 2024-25 school year (SY), double the rate of SY23-24.
Florida’s Miami-Dade County Public Schools, the nation’s third-largest school district, exemplifies this rapid shift: it moved from banning AI chatbots two years ago to rolling out Google’s Gemini chatbot to more than 1,000 educators and 100,000 high school students.
As AI use continues to grow in K-12, students are expressing both optimism and concern, and teachers are similarly conflicted about the technology’s place in schools. Before SY25-26 begins in a few months, school leaders must invest more in people, capacity, and infrastructure so that educators and students are equipped to adapt to new challenges and opportunities as the technology continues to evolve.
In other news:
- It’s always good to dig deeper on bold claims, as Benjamin Riley does by highlighting problems with early studies of AI’s impact.
- Rebecca Winthrop, senior fellow and director at the Brookings Institution, joined the Class Disrupted podcast in a “pre-mortem” on AI in education, identifying potential risks that should be addressed.
- Seniors at New Jersey’s North Star Academy Charter School of Newark are taking an AI-focused class that builds their understanding of AI tools and potential career options.
- Economist Tyler Cowen frames the AI-fueled cheating epidemic in higher education with optimism, viewing it as a catalyst to force institutional evolution.
The Latest: AI Sector Updates
In May, Anthropic launched Claude 4 in two versions, Sonnet and Opus, the latter of which it hailed as “the world’s best coding model.” Anthropic boasted that together the models “are a large step toward the virtual collaborator” (in other words, your new AI teammate). Yet it turns out that Claude 4 can also be your worst teammate: in pre-release safety tests, Opus 4 resorted to blackmailing its engineers to prevent being replaced with another AI model. Claude 4 was ultimately released with extra safeguards, but those may not be all that reassuring — after all, they were specifically developed for “AI systems that substantially increase the risk of catastrophic misuse.”
In other news:
- OpenAI announced a partnership with former Apple designer Jony Ive, previewing the development and release of an AI device they hope might change our lives the way the iPhone did.
- IBM’s CEO announced that the company has used AI to replace hundreds of workers, but that doing so has also allowed it to reinvest in new, more “human-facing” roles. The majority of staff reductions have occurred in HR functions, while the new jobs are in divisions that call for more human interaction or critical thinking, such as marketing, sales, and computer programming.
- Meanwhile, the San Francisco Standard reported that entry-level tech hiring has fallen nearly 50%, and that new graduates face a job landscape much like the one workers in 1980s manufacturing faced — one that’s shrinking quickly.
- Microsoft released a new report predicting that in the age of AI, successful companies must be “frontier firms” — companies “structured around on-demand intelligence and powered by ‘hybrid’ teams of humans and agents that allow companies to scale rapidly, operate with agility, and generate value faster.”
- As more and more organizations look to adopt AI or train their staff on it, one trending piece of advice is to create what Coursera Co-Founder Andrew Ng calls “sandboxes” for innovation, where employees can safely experiment with minimal repercussions. He joins Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, who advocated for similar “labs” back in 2023 and who, in 2025, has added an emphasis on leadership and “the crowd.”
- For the first time in Safari’s 22-year history, Google searches in the browser are falling, likely due to increased AI chatbot use. The trend suggests that as AI feeds people information, they are less inclined to visit websites or search directly in a browser, jeopardizing a major source of revenue for both independent publishers and Apple itself.
- The Chicago Sun-Times and Philadelphia Inquirer each published a summer reading list containing fake, AI-hallucinated books attributed to real authors. A spokesperson for the Sun-Times said the list had been a licensed insert from King Features, a subsidiary of the magazine company Hearst, and consequently had not been subject to the paper’s editorial review.
- A new paper analyzing the political slant of LLMs finds that most models are perceived as ideologically left-leaning. A dashboard of the paper’s data shows that “school vouchers” was perceived as the least-slanted topic tested, while “DEI programs” was the most slanted.
Pioneering Policy: AI Governance, Regulation, Guidance, and More
As of May 2025, Multistate.ai has tracked more than 1,000 bills that have been introduced in state governments to regulate various parts of the AI industry or AI usage. Without comprehensive federal AI regulation, state policymakers are addressing their constituents’ concerns (as is their job). Yet all 1,000-plus of those bills could be swept away if the U.S. Senate approves the current reconciliation bill, which includes a provision to establish the federal government as the sole authority to regulate AI and prohibit states from passing AI-related laws for the next decade. The bill, which has already passed the House of Representatives, now sits with the Senate for potential approval in June. House Republican authors of the provision argue that it will preempt a patchwork of inconsistent state regulations that could block AI innovation and adoption, leaving the U.S. at a disadvantage in the growing AI race against China.
Critics, however, are alarmed that states would not be able to pass additional protections for their constituents, and see the move as a capitulation to major tech companies’ desires. In Congressional hearings with the House Tech and Commerce Subcommittee, both OpenAI’s and Google’s CEOs emphasized the need to minimize regulations that could harm AI innovation. These statements also follow the priorities expressed in several major tech companies’ responses to the federal government’s March request for information on an AI action plan.
In all the buzz around the reconciliation bill’s spending cuts, debt limit increase, impact on Medicaid, and more — the “big, beautiful bill” is more than 1,000 pages itself — this AI provision may not have hit many people’s radars. If it passes, however, the long-term consequences could reshape education, the workforce, the economy, privacy protections, judicial processes, consumer protections, and more. It’s a provision worth watching.
In other news:
- A recent report from the Trump administration’s Make America Healthy Again Commission was “rife” with citation errors, including citations to studies that don’t exist, leading many to believe the errors are hallucinations indicative of generative AI use. NBC reported that the citations have since been updated, but noted that some of the report’s language also changed as a result.
- Pollster Patrick Ruffini highlights one of the scenarios from the AI 2027 piece covered in our last issue of The Leading Indicator: the potential for AI to influence the 2028 presidential election cycle.
- The Center for Democracy and Technology assessed trends in state education agency guidance on AI, finding that while states generally agree on the potential benefits and risks, there is more to be done to provide clarity and cover important topics such as deepfakes.
- Govern For America released “A Roadmap for Building, Staffing and Scaling AI Capacity in State Government” based on interviews with state government officials, technology talent experts, and career advisers.
- As AI technology advances, a new crop of scientists is studying a wholly different question: Should AI models have rights?
- In San Francisco, Perplexity.ai’s CEO Aravind Srinivas and Fei-Fei Li, the “godmother of AI,” debated a computer scientist and author in what an audience member called a “really nuanced conversation” regarding free speech and the question, “Will the truth survive Artificial Intelligence?”
- The European Commission and the Organisation for Economic Co-operation and Development reviewed existing frameworks, literature, and expert opinions to create an international framework for teaching K-12 AI literacy. Stakeholders are invited to comment on the draft before it is officially released next year.