August 13, 2025

The Leading Indicator: AI in Education Issue Twelve

By Alex Spurrier | Marisa Mission

Share this article

We want to hear from you! Help us celebrate Year One of The Leading Indicator by sharing your thoughts in this quick survey.

Summer 2025 has been just as busy as summer 2024 in the artificial intelligence (AI) world. In late July, the Trump administration released its long-awaited America’s AI Action Plan. Under three pillars — “Accelerate AI Innovation,” “Build American AI Infrastructure,” and “Lead in International AI Diplomacy and Security” — the plan promises to remove regulatory barriers, protect free speech and open‑source models, invest in federal research and evaluation ecosystems, streamline permitting for AI data centres and semiconductors, and export American AI models to allies while countering China’s influence. The American Enterprise Institute described the plan as “proactionary,” highlighting provisions for regulatory sandboxes that allow AI testing in controlled environments. Meanwhile, the MIT Technology Review argued that the plan sidesteps urgent issues such as immigration and federal research and development (R&D) funding. Notably missing from the plan: Aside from upskilling American workers, there isn’t a specific AI initiative related to education.

And yet, the forward momentum of the AI Action Plan coincides with a continued AI-in-education “gold rush.” The latest AI investment to launch: the National Academy for AI Instruction (NAAI), a partnership among the American Federation of Teachers (AFT), United Federation of Teachers, OpenAI, Microsoft, and Anthropic. The $23 million initiative aims to provide free workshops, online courses, and hands‑on coaching for more than 400,000 K‑12 educators across the country. AFT President Randi Weingarten says the academy will help teachers use AI “wisely and ethically” in a world where classroom budgets are shrinking. Separately, Microsoft also announced a $4 billion investment for its Elevate program, which will send AI tools and services to schools, community colleges, and nonprofits. The initiative also includes an AI Economy Institute to build a human-centered approach to AI that, in part, studies how the technology affects employment and education. And Google has committed $1 billion over three years to provide cloud credits, grants, and advanced Gemini tools to more than 100 U.S. institutions of higher education.

So, what are the AI tea leaves telling us? One way to look at the last month is through the lens of a federal government that is taking a hands-off approach to AI in K-12 and postsecondary education. Perhaps it’s reflective of the country’s decentralized education system, but either way this vacuum has created space for the largest AI companies to drive the conversation around integrating AI tools and skills into education. The NAAI partnership is the first to bring K-12 educators into the conversation; it’ll be worth watching when and how tech giants invite additional education experts (e.g., curriculum writers, teacher prep programs) to the table.

Help us bring our AI session to SXSW EDU! Click the link below to vote for our session, moderated by Marisa, on SXSW’s PanelPicker.

The Struggle is Real: Where AI Could Amplify Learning

Education Evolution: AI and Learning

Despite widespread trepidation about AI-powered ed tech tools, they continue to evolve — but to what end? OpenAI recently introduced “Study Mode” in ChatGPT, which ostensibly turns the chatbot into a Socratic-style tutor that guides students through problems step by step rather than giving answers outright. Yet the quality of Study Mode is questionable at best. One user claims to have found that rather than a sophisticated agent that has been fine-tuned or evaluated, Study Mode is merely a system prompt. And upon further testing, it caves fairly easily to a user’s request to “just give me the answer.” Comparatively, Google recently launched Guided Learning, which builds on the existing education-focused LearnLM model and was driven by a cross-disciplinary research team. And there are AI tools developed by organizations more directly focused on education (e.g., Coursemojo, Snorkl, Quill) that have found more success in building guardrails around students cheating So, despite OpenAI’s indisputable technological power, pedagogical experts are still a crucial part of building AI tools that are truly beneficial for students’ learning.

In other news:

  • Anthropic recently previewed Claude for Education integrations, which will allow students to reference readings, lecture recordings, and textbooks directly in conversations. 
  • Anthropic also launched a free AI Fluency course for learners outside of traditional postsecondary systems.
  • A Center on Reinventing Public Education (CRPE) report documents lessons learned from 18 California schools that piloted AI tools to address learning gaps, low engagement, and time constraints. CRPE found that with support, educators quickly mastered the tools and devised creative solutions, but pilots faltered when the tools weren’t aligned with instructional vision. Teachers wanted AI to preserve or deepen student–teacher relationships and valued problem‑solving over efficiency gains, but many still remained dissatisfied with current AI options, even after extensive experimentation.
  • A new study from Common Sense Media found that almost three in four teens have tried AI chatbots, with one‑third sharing personal information in those conversations — yet, 80% still prefer human friends.
  • A College Guidance Network survey of 602 U.S. parents found that two-thirds say AI is reshaping how they value a college degree, and over half fear it will shrink their children’s job prospects. 
  • A New York Times profile of Austin’s Alpha School, where K-12 students spend only two hours a day on academics — delivered via AI-powered apps — followed by project-based, life-skill–oriented workshops with adult “guides.” The model has sparked debate over its high cost (around $40,000/year), lack of third-party evidence, and philosophical questions about minimizing human-driven instruction.

The Latest: AI Sector Updates

OpenAI has done it again: Forging ahead of its peers by releasing GPT-5. However, unlike the past procession of confusingly-named models (4o, 4.5, o3, o4, etc.), GPT-5 is not a singular, better large language model (LLM). Instead, as Sam Altman previewed six months ago, it’s a system that automatically chooses from a set of OpenAI’s frontier models to best answer a user’s prompt. As Ethan Mollick writes, this might be game-changing for novice or early users who may not have been confident in how to configure AI models for different tasks or prompts. But for advanced users, it may require additional prompting to get the quality of result one might be used to, and the lack of clarity around which model is answering the prompt contributes to AI’s “black box” problem.

In other news:

  • In July, OpenAI also quietly released Agent Mode for ChatGPT, which gives the model its own “computer” to browse websites, analyze spreadsheets, create slide ware, and even shop on a user’s behalf. Agent Mode’s capabilities mirror those from Chinese company Manus.im, giving users an American option.
  • At this year’s International Mathematical Olympiad, Google DeepMind’s Gemini with DeepThink model correctly solved five of the six problems, officially earning a gold medal — something only about 8% of human contestants achieve.
  • Pew Research Center found that when Google shows an AI‑generated “overview,” users are less likely to click on traditional search results (8% versus 15% when no AI summary appears). 
  • Anthropic has been publishing a steady drumbeat of papers on AI safety, and its latest proposes a targeted transparency framework for the largest AI developers that would require them to publish secure development frameworks, release system cards summarizing safety tests, and protect whistleblowers. The company argues such measures can improve accountability without stifling innovation.
  • In a recent AI Snake Oil blog post, authors argue that despite exponential growth in published papers and R&D spending, measures of scientific “disruptiveness” and major breakthroughs are flat or declining. The authors warn that AI could exacerbate this production–progress paradox, boosting output but not true scientific progress. 
  • Popular newsletter platform Substack surveyed 2,000 of its authors on their AI use, finding that 45% use AI mostly for productivity, research, and proofreading — not to write entire pieces. AI usage skews male (55% of men versus 38% of women), and many non-AI-using authors’ top concern is over the authenticity and privacy of their content.
  • At OpenAI, Altman cautioned that conversations with ChatGPT aren’t legally protected, and users who treat it like a therapist or life coach have no doctor‑patient privilege. This carries implications for court cases since OpenAI could be compelled to disclose sensitive personal information.
  • Meta will let some job applicants use an AI assistant during coding interviews to “better reflect real‑world development and make AI cheating less effective,” as outlined in an internal memo. The company is recruiting employees to test the process but emphasized that “[h]umans talking to humans will always be part of the interview process.” 

Pioneering Policy: AI Governance, Regulation, Guidance, and More

Although the Trump administration’s AI Action Plan didn’t include a specific education plank, the U.S. Department of Education released both a Dear Colleague Letter and a draft grant priority encouraging responsible AI adoption in schools. The guidance affirms that AI‑related costs (e.g., instructional materials, AI‑enabled tutoring and career exploration tools) are allowable under federal programs as long as statutory requirements are met. The public comment period for the grant priority is open until August 20.

Meanwhile, across the Atlantic, the European Commission issued guidelines for providers of general‑purpose AI models to help them meet obligations under the European Union (EU) AI Act. The guidelines define general‑purpose AI models, clarify what constitutes a “provider” and “placing on the market,” explain exemptions for open‑source models, and outline risk‑assessment and mitigation obligations for the most advanced models. Executive Vice‑President for Tech Sovereignty, Security, and Democracy, Henna Virkkunen, said the goal is to provide legal certainty so developers can innovate confidently while ensuring AI models are safe and aligned with EU values; however, other EU leaders are now expressing hesitance to regulate AI so strictly given the U.S. and China’s lack of regulatory frameworks.

In other news:

More from this topic

Processing...
Thank you! Your subscription has been confirmed. You'll hear from us soon.
ErrorHere