Before you unpack the latest Leading Indicator updates on artificial intelligence (AI) in education, be sure to check out an AI survey our colleagues recently released that tracks school system leaders’ views of the technology among staff and students. Please give it a read and share it with your networks!
2025 opened with a bang for AI: OpenAI, Perplexity, xAI, and Google launched “deep research” capabilities for their chatbot products. Instead of responding to prompts one word at a time, these new models take time to “think” through their process step-by-step (for more detail, Ethan Mollick has you covered). And the release of DeepSeek’s R1 — an open-source AI model developed by a Chinese company — sent shockwaves through tech markets for its quality and (potentially dubious) cost claims (check out Claire Zau’s great explainer on R1).
More effective and efficient models aren’t the only AI news — tools with a growing ability to act autonomously, or “agentic” products, are entering the market. OpenAI is providing its pro subscribers with access to Operator, an AI-powered tool that can use web browsers to accomplish tasks autonomously. And Perplexity launched a mobile assistant capability on its Android app that can “see” users’ screens and complete tasks by opening and using apps.
The improved quality and growing capabilities of AI tools are impressive, but they raise questions for those of us working in education. The latest AI models perform similarly to human experts on doctorate-level science and math benchmark tests. Economist Tyler Cowen thinks that the output generated by OpenAI deep research in just a few minutes is “comparable to having a good PhD-level research assistant, and sending that person away with a task for a week or two, maybe more.” In the Understanding AI newsletter, Timothy B. Lee asked a sample of his readers to try the OpenAI and Google deep research products; most judged the OpenAI tool’s output to range from the work of an entry-level employee to that of an expert in the field. In the dawning age of AI researchers and agents, what capacities should schools focus on cultivating in people to complement these emerging technologies (and maybe compete with them)?
Now more than ever, cultivating deep reservoirs of content knowledge across multiple domains is essential. As the quality of AI outputs gains a glossier, pseudo-academic veneer, domain expertise will be a critical safeguard for identifying errors when hallucinations occur. AI systems might be able to draft 95% of a mandatory Securities and Exchange Commission (SEC) Form S-1 filing at an investment bank, but figuring out the final 5% still requires financial and legal experts to ensure the document complies with SEC regulations prior to being listed on a public exchange. And when the information essential to answering a prompt isn’t easily accessible on the internet, AIs are blind to insights that might flow from those data points.
What does this all add up to? Knowledge still matters in the age of AI. There’s no shortcut to developing our own neural networks based on long-term memory and experience. The good news is that there’s a growing movement to support greater adoption of knowledge-rich curricula in schools. There may be a role for AI to support that work, but it must be done thoughtfully. Recent research indicates that access to ChatGPT can induce “metacognitive laziness” among students. If we want students to thrive in a world reshaped by ever more powerful (and ever-evolving) AI models, we must work to ensure they have the cognitive tools to do so.
Education Evolution: AI and Learning
- A summary of research conducted on an AI-supported after school program in Nigeria indicates positive impacts on student learning; Dan Meyer urges caution when interpreting those results until more details are known about the controls used in the study (which has not yet been published).
- OpenAI released a report that shows more than one-third of college-aged U.S. adults are using ChatGPT, and that education-related use cases are the most common among this population of AI users.
- Relatedly, new research finds that 60% of the undergraduates sampled use AI on a regular basis for academic tasks and that many — especially non-STEM students — overestimate the capabilities of AI models.
- New research from Anthropic analyzes millions of prompts to Claude. Coding and technical writing are by far the most common use cases, but educational tasks are also more common than you might expect among the general population, including designing curricular materials and supporting differentiated instruction.
- Last month, the Christensen Institute released a report, Navigation & Guidance in the Age of AI, emphasizing that human relationships must be prioritized. Report co-author Julia Freeland Fisher summarizes the report’s findings in The 74.
- In a podcast with Diane Tavenner and Michael B. Horn, John Bailey (a partner with some of Bellwether’s AI work) argues that AI can be transformative in K-12 education in part because it democratizes access to expertise.
- In a blog for the Center on Reinventing Public Education, Morgan Polikoff, Amie Rapaport, and Nathanael Fast highlight new nationwide survey data on parents’ understanding of AI use in their children’s schools, finding large gaps in communication and awareness of AI policies and use.
- The 74’s Greg Toppo interviewed Khan Academy’s Kristen DiCerbo on the latest developments with Khanmigo and how AI might support conversation-based assessments.
- In an Education Next forum, John Bailey and John Warner weigh in with optimistic and skeptical takes, respectively, to Sal Khan’s newest book, “Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing).”
- A Pennsylvania virtual charter school application that would have included classes led by “AI tutors” was rejected by the state’s Department of Education. (This article may be the first time that the phrase “human teachers’ unions” has been used, which raises a question: Will we see AI teachers’ unions in the future?)
- Students and staff at the California State University (CSU) system are getting access to ChatGPT Edu. OpenAI claims that CSU will be “the first AI-powered university system in the United States.”
- The Chan Zuckerberg Initiative (CZI)* announced two new developer tools for education to help align AI products with learning science and state standards and to assess the quality of AI tool outputs for teaching and learning.
The Latest: AI Sector Updates
- Late last month, OpenAI, Oracle, and SoftBank announced a project called Stargate in a White House briefing with President Trump. The project is a $500 billion partnership to build AI infrastructure in the U.S.
- The World Economic Forum’s Future of Jobs Report 2025 found that AI skills are in demand and likely to drive change: 86% of employers surveyed believe that AI is likely to drive transformation in their businesses. The finance sector may be feeling this more acutely than others in the coming years: A report from Bloomberg Intelligence forecasts that banks around the globe will cut up to 200,000 jobs as AI replaces work currently performed by humans.
- Fake news? Good Daily is a one-man shop behind 355 AI-generated “local” newsletters for small towns in 47 states across the U.S. There’s no disclosure to users about its use of AI — human author Andrew Deck digs into the story for Nieman Lab.
Pioneering Policy: AI Governance, Regulation, Guidance, and More
- The Trump Administration issued an executive order to repeal the Biden Administration’s executive order on AI.
- Vice President JD Vance’s first major speech was at an AI summit in Paris, where he called for policies that can incubate AI innovation and warned against the dangers of excessive regulation.
- The U.S. House Bipartisan Task Force on Artificial Intelligence delivered its report, which includes K-12 recommendations such as AI literacy support for educators and enhanced investments in STEM education.
- According to Multistate, a major wave of AI legislation across the country is growing in the early months of 2025. More than 700 bills have been filed thus far across federal and state legislatures, compared to 743 in all of 2024 and fewer than 200 in 2023. If Congress does not pursue comprehensive AI legislation at the federal level, state policymakers will act to address constituent concerns, which may introduce more complex — and potentially contradictory — rules for AI companies.
- Last month, the U.S. Department of Education’s Office of Educational Technology released a report on building capacity for the implementation of AI in postsecondary education.
*Editor’s note: For a complete list of current and former clients and funders, including CZI, visit our website.