February 19, 2026

The Leading Indicator: AI in Education Issue Sixteen

Much of the conversation about artificial intelligence (AI) in education oscillates between fear and hype. But a team of us at Bellwether wanted to do something different: investigate what thoughtful AI implementation in education actually looks like. Through interviews with leaders from more than 20 organizations across the country, we found a community of ed tech practitioners actively working through the messy middle ground to ensure that AI is truly boosting student outcomes.

Built for Learning pulls back the curtain on the design trade-offs and implementation challenges that shape AI-powered ed tech. The first installment highlights five themes that emerged from this work: balancing cognitive support with productive struggle, defining appropriate roles for teachers versus technology, navigating measurement challenges, building infrastructure that reflects both practical and pedagogical needs, and finding sustainable business models that don’t sacrifice learning outcomes for growth. Five accompanying case studies — spanning literacy, writing, mathematics, career-connected learning, and school operations — illustrate what these trade-offs look like across different contexts.

The series is deliberately practical. Practitioners and educators can use these themes to ask sharper questions of vendors about how their AI tools will serve specific students. Ed tech developers might find new approaches to common challenges. And policymakers and funders can better assess whether AI tools align with educational priorities and what infrastructure conditions matter for effective implementation.

As tech giants like OpenAI, Google, and Microsoft increasingly turn their attention to education, understanding what’s happening “under the hood” of AI-powered tools is an essential priority. The organizations profiled in this series may lack Big Tech’s resources, but their deep pedagogical expertise offers lessons for anyone trying to distinguish AI tools built for learning from those built for scale.

Education Evolution: AI and Learning

Google and Khan Academy announced a major partnership at BETT 2026: Gemini-powered writing and reading coaches for Grades 5-12. The tools don’t generate essays for students — instead, they guide learners through outlining, drafting, and editing. CEO Sal Khan framed the partnership around a specific pain point: “School district leaders are telling us that one of the biggest challenges they face right now is helping middle and high school students who are behind academically, especially in reading and language arts.”

Meanwhile, Anthropic launched the AI Literacy & Creator Collective with Teach For All, bringing AI education to more than 100,000 educators across 63 countries. The model positions teachers as “co-architects shaping how AI develops” rather than passive consumers. Educators get Claude access while providing ground-level feedback to inform product development. Teachers are already building practical tools: a climate curriculum in Liberia, a gamified math app in Bangladesh, interactive digital workspaces in Argentina.

These partnerships reflect a broader bet: AI in education works best when it amplifies human teaching rather than replacing it. But the question of what to amplify remains contested. A New America brief argues that the U.S. needs a comprehensive national digital literacy framework; without foundational tech skills, it warns, AI literacy initiatives will only widen the digital divide.

The Latest: AI Sector Updates

Software engineers spent their holiday breaks getting “Claude-pilled.” That’s the term the Wall Street Journal used to describe the revelation many experienced after trying Claude Code, Anthropic’s autonomous coding agent. Some completed year-long projects in a week, while noncoders built their first software programs. Noah Smith’s widely shared essay put it bluntly: the age of humans who could think like computers is drawing to a close, and software will soon be “conjured up rather than crafted.”

But these are power-user tools that require the command line, a programmer interface most people will never touch. The more significant development may be Anthropic’s push to bring agentic AI to everyday knowledge workers. Cowork brings Claude Code’s capabilities to the desktop app for noncoders: file organization, research synthesis, and document creation. Claude in Excel embeds AI directly into spreadsheets. And Claude in PowerPoint, released in beta last month, can take structured data from Excel and “bring it to life visually” in presentations. These consumer-friendly interfaces are where agentic AI meets the average office worker — and, eventually, the average K-12 administrator, educator, and student.

The gap between early adopters and everyone else is stark. As New York Times columnist Kevin Roose observed: “People in [San Francisco] are putting multi-agent claudeswarms in charge of their lives … people elsewhere are still trying to get approval to use Copilot in Teams, if they’re using AI at all.” For educators, that yawning divide is both a warning and an opportunity to reconsider educational approaches in the context of increasingly agentic AI.

In other news:

  • OpenAI unveiled ChatGPT Health, letting users connect medical records and wellness apps — with 230 million users already asking health care questions on a weekly basis.
  • Days later, Anthropic announced Claude for Healthcare, featuring partnerships with AstraZeneca, Sanofi, and Banner Health.
  • Anthropic’s latest Economic Index report found college-level tasks are completed about 12 times faster with AI — but warned of a “de-skilling” effect that could hollow out complex parts of jobs like teaching.
  • OpenAI is testing ads in ChatGPT for free and low-cost users, with CEO Sam Altman framing it as a way to expand access to billions who can’t afford paid plans.
  • A Brookings brief argues that entry-level jobs need the equivalent of medical residencies to survive AI automation.
  • The American Enterprise Institute’s (AEI’s) John Bailey reflects on what a year of living with AI taught him, offering a practitioner’s perspective on integrating AI into daily work.*

Pioneering Policy: AI Governance, Regulation, Guidance, and More

Something unexpected happened in AI politics at the turn of the year: Bernie Sanders and Ron DeSantis found common ground. The Vermont senator and Florida governor — who agree on virtually nothing — have both emerged as leading skeptics of the AI industry’s data center boom, citing rising electricity costs, grid strain, and labor market disruption. Their unlikely alignment signals a brewing political reckoning that could slow AI infrastructure development if it reaches broader bipartisan consensus.

The grassroots backlash is real: 142 activist groups across 24 states are now organizing against data center expansion. Trump voters in rural Oklahoma are fighting proposed facilities alongside Democratic Socialists of America chapters. Between April and June 2025 alone, data center proposals valued at $98 billion across 11 states were blocked or delayed. With the economy and affordability at the center of American politics, the data center fight is becoming a midterm election flashpoint — one that educators should watch closely, given AI infrastructure’s implications for school district electricity costs and local tax bases.

*Editor’s note: Some organizations and funders in this newsletter, including the Overdeck Family Foundation, are past or present clients or funders of Bellwether. In addition, Bellwether has partnered with AEI’s John Bailey on prior AI publications.
