February 24, 2025

The Leading Indicator: AI in Education Issue Seven

By Alex Spurrier | Marisa Mission

Before you unpack the latest Leading Indicator updates on artificial intelligence (AI) in education, be sure to check out an AI survey our colleagues recently released that tracks how school system leaders view the technology’s use among staff and students. Please give it a read and share it with your networks!

2025 opened with a bang for AI: OpenAI, Perplexity, xAI, and Google launched “deep research” capabilities for their chatbot products. Instead of responding to prompts one word at a time, these new models take time to “think” through a problem step-by-step (for more detail, Ethan Mollick has you covered). And the release of DeepSeek’s R1, an open-source AI model developed by a Chinese company, sent shockwaves through tech markets with its quality and its (potentially dubious) cost claims (check out Claire Zau’s great explainer on R1).

More effective and efficient models aren’t the only AI news: tools with a growing ability to act autonomously, often called “agentic” products, are also entering the market. OpenAI is giving its Pro subscribers access to Operator, an AI-powered tool that can use a web browser to accomplish tasks on its own. And Perplexity launched a mobile assistant on its Android app that can “see” users’ screens and complete tasks by opening and using other apps.

The improved quality and growing capabilities of AI tools are impressive, but they raise questions for those of us working in education. The latest AI models perform similarly to human experts on doctorate-level science and math benchmark tests. Economist Tyler Cowen thinks the output OpenAI’s deep research generates in just a few minutes is “comparable to having a good PhD-level research assistant, and sending that person away with a task for a week or two, maybe more.” In the Understanding AI newsletter, Timothy B. Lee asked a sample of his readers to try the OpenAI and Google deep research products and found that most thought the OpenAI tool could produce work ranging from that of an entry-level employee to that of an expert in the field. In the dawning age of AI researchers and agents, what capacities should schools focus on cultivating in people to complement (and perhaps compete with) these emerging technologies?

Now more than ever, cultivating deep reservoirs of content knowledge across multiple domains is essential. As AI outputs take on a glossier, crypto-academic veneer, domain expertise will be a critical safeguard for catching errors when hallucinations occur. AI systems might be able to draft 95% of a mandatory Securities and Exchange Commission (SEC) Form S-1 filing at an investment bank, but getting the final 5% right still requires financial and legal experts to ensure the document complies with SEC regulations before a company lists on a public exchange. And when the information needed to answer a prompt isn’t readily available on the internet, AI tools are blind to the insights that might flow from those data points.

What does this all add up to? Knowledge still matters in the age of AI. There’s no shortcut to developing our own neural networks, based on long-term memory and experience. The good news is that there’s a growing movement to support greater adoption of knowledge-rich curricula in schools. There may be a role for AI to support that work, but it must be done thoughtfully: recent research indicates that access to ChatGPT can induce “metacognitive laziness” among students. If we want students to thrive in a world reshaped by ever-more-powerful (and ever-evolving) AI models, we must ensure they have the cognitive tools to do so.

Education Evolution: AI and Learning

The Latest: AI Sector Updates

Pioneering Policy: AI Governance, Regulation, Guidance, and More

 

*Editor’s note: For a complete list of current and former clients and funders, including CZI, visit our website
