A new executive order from the Trump administration on artificial intelligence (AI) in education marks yet another distinct break from the Biden administration’s approach to AI policy. In place of the Biden era’s safety-focused posture, the new order emphasizes greater AI integration into K-12 education. A more proactive federal push for AI in education could foster thoughtful adoption of this evolving technology by students and educators (an area where China is also making a concerted push). But this development isn’t happening in a vacuum; the path to positive, scalable impact will be narrow.
Let’s start with the core of the recent executive order, which establishes a task force to carry out policy that will “promote AI literacy and proficiency among Americans by promoting the appropriate integration of AI into education, providing comprehensive AI training for educators, and fostering early exposure to AI concepts and technology.” To accomplish this, the order also calls for a “Presidential Artificial Intelligence Challenge,” public-private partnerships to develop AI education resources, prioritizing the use of AI in teacher training grant programs, and boosting AI-related apprenticeships.
Some of the EO’s substance is promising. Highlighting creative applications of AI in education can illuminate the practical potential of these new tools. Supporting AI literacy among students and teachers will be essential as AI takes a larger role in how people get work done. And promoting AI-related apprenticeships can help students get a foot in the door of a booming industry.
To what degree, though, will these efforts support meaningful change in the relationship between schools and AI?
The answer is probably somewhere between “very little” and “not at all.” School systems across the country are grappling with persistent absenteeism and student achievement issues in the wake of the COVID-19 pandemic, a more uncertain federal funding environment as pandemic emergency relief funds expire, and the implications of a flurry of Trump administration executive orders affecting K-12 policy. In this political environment, it’s unclear how much these new (as yet unfunded) federal priorities will cut through the many competing demands on educators’ plates.
The coming months will be a critical juncture worth watching closely as implementation of the EO unfolds. The Trump administration’s policies may amount to a small step forward for AI in education, but more work will be necessary to help students and educators use AI tools thoughtfully.
Education Evolution: AI and Learning
A recent article, provocatively titled “Teachers Worry About Students Using A.I. But They Love It for Themselves,” highlights a growing acceptance of AI among educators, who are finding it useful for their own purposes despite worries about student cheating. A slew of national surveys support this emerging trend. Carnegie Learning’s “The State of AI in Education 2025” found that AI “power users” among educators have doubled, even as concerns about cheating have grown. Perhaps one reason for this trend: growth in educator training amid a distinct lack of growth in AI policies. RAND recently found that roughly half (48%) of districts provided AI training to teachers, nearly double the share from fall 2023. By comparison, Carnegie found that only 40% of districts had AI policies, and in a recent Gallup poll over half (51%) of students reported that their school didn’t have one. This gap underscores the need for districts and schools to expedite policies that make students and teachers feel confident using AI responsibly.
In other news:
- If you missed April’s ASU+GSV Summit, the largest ed tech conference in the world, check out reflections on key themes from Getting Smart’s Tom Vander Ark, the Center on Reinventing Public Education’s Robin Lake, and Amplify’s Dan Meyer.
- The party’s heating up in higher education as Anthropic joins OpenAI and Google Gemini in what The Verge characterized as a “bidding war” for students’ attention:
- Anthropic kicked things off by announcing Claude for Education — a specialized version of Claude designed to help universities integrate AI across teaching, learning, and administrative functions. The AI tool includes “Learning Mode,” which guides students through their thinking, Socratic seminar-style, rather than providing direct answers.
- Anthropic also analyzed Claude conversations to better understand how college students are using AI (a helpful addition to OpenAI’s February report). Notably, Anthropic flagged concerns about academic integrity and critical thinking skills, finding that “nearly half (~47%) of student-AI conversations were Direct — that is, seeking answers or content with minimal engagement.”
- Meanwhile, Google announced free access to its advanced Gemini models and tools for college students through 2026, undercutting the $20-per-month subscriptions for comparable ChatGPT and Claude tiers.
- Beyond major tech companies, other entities are looking to higher education as a prime testing ground for AI rollout and adoption:
- Internet2 is a consortium focused on delivering AI and other advanced technology initiatives to support research and education. Its latest member, Glean, is supporting faculty and students through AI-powered, secure search functions.
- Netflix co-founder Reed Hastings is also jumping into AI — but not to drive technical developments. Instead, he’s donating $50 million to Bowdoin College (his alma mater) to create a research initiative on “AI and Humanity,” which will study the risks and consequences of the technology.
- In K-12, Anthropic published a case study on Panorama’s use of Claude to securely process and analyze student data from various sources. According to Panorama, this integration “enables districts to identify patterns across previously disconnected information, detect at-risk students earlier, and develop tailored support strategies.”
- And for those learners not in public schools or college, OpenAI’s Academy is a free set of educational resources (live and asynchronous) that aim to increase AI literacy for “individuals regardless of background.” OpenAI is partnering with several colleges and nonprofits to continue growing its Academy service offerings.
The Latest: AI Sector Updates
In our last issue, we highlighted a report from OpenAI about misbehavior in frontier models (e.g., ChatGPT, Claude, Gemini, and Grok). Continuing that saga, Anthropic published a similar paper in April warning that reasoning models may be able to “lie” in their chain-of-thought process, which AI developers and researchers typically rely on to verify that a model is working properly.
Soon after, OpenAI updated its Preparedness Framework, which measures frontier models on a variety of metrics related to safety in AI development and deployment. Among OpenAI’s changes are new categories of AI capabilities that could be used for harm, such as “Long-range Autonomy” (carrying out long, complex tasks without human oversight), “Sandbagging” (intentionally underperforming), “Autonomous Replication and Adaptation” (copying itself and adapting to new environments), and “Undermining Safeguards.”
These signals from OpenAI and Anthropic underscore how the benefits of advanced AI reasoning and agentic capabilities also have a dark side. As new models are continuously developed and released, more attention will need to be paid to safety risks.
In other news:
- OpenAI released its latest advanced reasoning models, o3 and o4-mini, to all paid subscribers. The new models integrate a range of existing capabilities (computer vision, web search, Python-based data analysis, and image processing and generation, among others) directly into a single chatbot interface. OpenAI notes that “for the first time, these models can integrate images directly into their chain of thought. They don’t just see an image — they think with it.”
- More big news for OpenAI: the forthcoming release of its open-weight model (its first since GPT-2), the release of GPT-4.1 for developers using the API, and the integration of image generation into GPT-4o.
- Anthropic recently rolled out a Research mode similar to Google, Perplexity, and OpenAI’s DeepResearch functions (currently only available to Claude Max subscribers), and a new Google Workspace integration that allows Claude to pull information from Gmail, Google Calendar, and Google Drive.
- Meta released its Llama 4 “herd”: Scout and Maverick, plus a preview of the larger Behemoth. The models, which differ mainly in size, are billed by Meta as the first natively multimodal open-weight models.
- At the end of March, Google quietly released Gemini 2.5, which outperformed several frontier models and was outshone only by OpenAI’s DeepResearch mode.
- Stanford University’s Institute for Human-Centered AI released its annual 2025 AI Index Report, with a set of top 10 trends.
- A recent study investigated the effects of AI on workers’ performance at Procter & Gamble, finding that those assigned to use AI not only matched the performance of their AI-less coworkers, but that the use of AI actually broke down cross-division silos and allowed teams to suggest more “balanced” solutions.
- Daniel Kokotajlo, a former governance researcher at OpenAI, issued a new set of AI predictions for 2027 and beyond, building on his (fairly accurate) projections from August 2021. Although Kokotajlo and his colleagues paint a vivid picture of what superintelligence may look like, this essay provides a good counter-vision. The authors argue that technology diffuses much more slowly than it develops, which means second-order effects (e.g., economic or workforce consequences) may be further down the line than we might think.
Pioneering Policy: AI Governance, Regulation, Guidance, and More
Trump administration tariffs are shaking up AI sector growth forecasts, with analysts predicting slowed investment until economic uncertainty subsides (read a deeper dive here). AI-related hardware companies like Apple and NVIDIA are trying to diversify their production pipelines — for example, NVIDIA is beginning to make semiconductor chips at a new plant in Arizona — but building up the domestic infrastructure to meet AI companies’ demands will take time.
Meanwhile, software companies like Google, Amazon, and Microsoft are monitoring how Trump’s immigration policies may affect their current and future talent pools. A recent Institute for Progress investigation found that “60% of the top US-based AI companies have at least one immigrant founder,” and the National Science Foundation reported in 2024 that 58% of doctorate-level computer and mathematical scientists in the U.S. workforce were born outside the country. Moving forward, industry leaders and policymakers should watch closely to see how the Trump administration’s domestic policies inhibit or promote growth in the country’s AI sector.
In other news:
- The Trump administration is pushing federal agencies to adopt AI faster, ordering agencies to identify “chief AI officers” and develop plans for removing barriers to AI use within six months.
- Three weeks before Executive Order 14277 was signed, the House Committee on Education & Workforce held a hearing on AI in education. The Committee concluded that the AI sector is moving too quickly, and that local educators are best positioned to respond to AI in classrooms (in other words, unlike the executive branch, Congress will likely not be taking action on AI in education anytime soon).