September 4, 2025

The Leading Indicator: AI in Education Issue Thirteen

By Alex Spurrier and Marisa Mission


Over the past few issues, we’ve focused a lot on how students are using artificial intelligence (AI) in their everyday academic lives. But teens aren’t just using AI for homework help — some are bonding with these tools. Adolescent-AI relationships can produce blurred boundaries with reality, as illustrated in a recent New York Times Magazine profile of a student who chats daily with a Character.AI avatar. These relationships can also lead to tragedy, as was the case in the recent death by suicide* of a teenager whose ChatGPT interactions sometimes validated harmful plans rather than steering them to help. Together, these stories show how “AI relationships” can feel supportive for some students and dangerously absorbing for others, especially in long chats where safeguards can fray.

The fallout has begun: The family in the latter case filed a wrongful-death suit against OpenAI; the company, in turn, announced new measures to address how ChatGPT responds to users’ signs of mental and emotional distress. Meanwhile, Illinois drew a bright line with its new Wellness and Oversight for Psychological Resources Act, barring the use of AI tools for therapy; violations can be punished with fines of up to $10,000. In passing the Act, Illinois joins Nevada in outlawing AI-based mental health services; several other states are considering similar regulations.

We first raised the issue of children developing parasocial relationships with AI chatbots in one of this newsletter’s earliest issues, in August 2024, calling for “extreme caution” when it comes to introducing AI into the social development of young minds. That warning remains true today, especially as AI tools become more capable, compelling, and popular with children and teens.

Education Evolution: AI and Learning

New polling and usage data show that teachers and professors are leaning on AI in strikingly similar ways. In K-12, a nationwide Gallup/Walton poll conducted in April found that 60% of teachers used AI in the 2024-25 school year (SY), mainly for instructional support such as lesson planning and worksheet creation. In higher education, Anthropic’s analysis of 74,000 Claude chats from faculty found that a majority (57%) were tied to curriculum development. Both datasets point to relatively low uptake of AI for grading: just 16% of K-12 teachers said they use AI tools to grade, and only 7% of postsecondary chats involved assessing student work. For now, the loop in which AI writes assignments, students complete them with AI, and other AI tools do the grading hasn’t fully materialized…yet. Instructional design is being reshaped, but grading remains mostly human, a trend worth watching as schools head into SY25-26.


The Latest: AI Sector Updates

The clearest signal yet that AI is reshaping the labor market just arrived, and it’s hitting the first rung of the career ladder. A new Stanford University analysis finds a 13% relative decline in employment since late 2022 for the youngest professionals in the most AI-exposed occupations, such as software developers and customer service representatives. Older workers’ employment in those same jobs is roughly flat to rising. For more analysis of this research, check out what John Bailey, Noah Smith, and Derek Thompson are saying.

One thing is clear: K-12 schools, institutions of higher education, and employers should be thinking about how to adapt their talent pipelines to an economy that offers fewer traditional entry-level jobs for younger workers.


Pioneering Policy: AI Governance, Regulation, Guidance, and More

AI is coming to the nation’s capital in new ways, and vice versa. Internally, a new General Services Administration deal with OpenAI gives the entire federal executive branch access to ChatGPT Enterprise for a nominal $1 per agency for one year, paired with training and security guardrails to standardize government use. Externally, the U.S. will acquire a 9.9% stake in Intel via an $8.9 billion stock purchase funded by previously awarded money from the Creating Helpful Incentives to Produce Semiconductors (CHIPS) and Science Act. The government’s stake in Intel is passive (no board seat; it votes with management), but it puts federal capital on the company’s cap table.

Direct federal investment in a company that produces a key component of AI technology (semiconductor chips) has its supporters and detractors. Ben Thompson of Stratechery argues the equity move is a credible, long-horizon signal to keep a U.S. foundry viable. On the other side of the debate, economist Tyler Cowen is wary of the risks associated with creeping state ownership of industry. Either way, it signals a new phase of the federal government’s involvement in the AI sector.


*EDITOR’S NOTE: This newsletter includes mention of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988. There is also an online chat at 988lifeline.org.
