Over the past few issues, we’ve focused a lot on how students are using artificial intelligence (AI) in their everyday academic lives. But teens aren’t just using AI for homework help — some are bonding with these tools. These adolescent-AI relationships can blur the boundary between chatbot and reality, as illustrated in a recent New York Times Magazine profile of a student who chats daily with a Character.AI avatar. They can also end in tragedy, as in the recent death by suicide* of a teenager whose ChatGPT interactions sometimes validated harmful plans rather than steering them toward help. Together, these stories show how “AI relationships” can feel supportive to some students and dangerously absorbing to others, especially in long chats where safeguards can fray.
The fallout has begun: The family in the latter case filed a wrongful-death suit against OpenAI; the company, in turn, announced new measures to address how ChatGPT responds to signs of mental and emotional distress in users. Meanwhile, Illinois drew a bright line with its new Wellness and Oversight for Psychological Resources Act, which bars the use of AI tools to provide therapy; violations carry fines of up to $10,000. In passing the Act, Illinois joins Nevada in outlawing AI-based mental health services; several other states are considering similar regulations.
We first raised the issue of children developing parasocial relationships with AI chatbots in one of this newsletter’s earliest issues, back in August 2024, calling for “extreme caution” when it comes to introducing AI into the social development of young minds. That warning remains true today, especially as AI tools become more capable, compelling, and popular with children and teens.
Education Evolution: AI and Learning
New polling and usage data show that teachers and professors are leaning on AI in strikingly similar ways. In K-12, a nationwide Gallup/Walton poll conducted in April found that 60% of teachers used AI in the 2024-25 school year (SY), mainly for instructional support such as lesson planning and worksheet creation. In higher education, Anthropic’s analysis of 74,000 Claude chats from faculty found that a majority (57%) were tied to curriculum development. Both datasets point to relatively low uptake of AI for grading: just 16% of K-12 teachers said they use AI tools to grade, and only 7% of postsecondary chats involved assessing student work. The closed loop in which AI writes assignments, students complete them with AI, and other AI tools grade the results hasn’t fully materialized yet. Instructional design is being reshaped while grading remains mostly human — a trend worth watching as schools head into SY25-26.
In other news:
- A “needs improvement” grade for edu-chatbots: Common Sense Media’s review of four “AI teacher assistants” (Curipod, Gemini for Classroom, Khanmigo, and MagicSchool) notes their potential time savings but flags that they “pose a moderate risk to students and educators,” including biased outputs and shaky Individualized Education Program (IEP) generators.
- Ransomware has plagued many K-12 districts in recent years; now AI is fueling the next wave of these attacks.
- Over at the Center on Reinventing Public Education, Robin Lake asks: Will SY25-26 bring an AI reckoning in K-12 education?
- Golden State partnerships: California announced deals with Google, Microsoft, Adobe, and IBM to deliver free AI training and tools (including access to Google’s Gemini and NotebookLM) across the state’s 116-college, 2.1 million-student postsecondary system.
The Latest: AI Sector Updates
The clearest signal yet that AI is reshaping the labor market just arrived — and it’s hitting the first rung of the career ladder. A new Stanford University analysis finds a 13% relative decline in employment since late 2022 for the youngest professionals in the most AI-exposed occupations, such as software development and customer service. Employment for older workers in those same jobs is roughly flat to rising. For more analysis of this research, check out what John Bailey, Noah Smith, and Derek Thompson are saying.
One thing is clear: K-12 schools, institutions of higher education, and employers should be thinking about how to adapt their talent pipelines to an economy that offers fewer traditional entry-level jobs for younger workers.
In other news:
- Browser’s little helper: Anthropic is launching a new Claude agent that operates as a Google Chrome extension.
- You miss 100% of the shots you don’t take: Perplexity offers to buy Chrome.
- The pen may be mightier than the sword, but the checkbook is mightier still: Anthropic settles a lawsuit brought by authors.
- Watching the bots work: John Bailey on why chain-of-thought monitoring in GPT-5 is important for AI safety.
Pioneering Policy: AI Governance, Regulation, Guidance, and More
AI is coming to the nation’s capital in new ways, and the capital is coming to AI. Internally, a new General Services Administration deal with OpenAI gives the entire federal executive branch access to ChatGPT Enterprise for a nominal $1 per agency for one year, paired with training and security guardrails to standardize government use. Externally, the U.S. will acquire 9.9% of Intel via an $8.9 billion stock purchase funded by previously awarded money from the Creating Helpful Incentives to Produce Semiconductors (CHIPS) and Science Act. The government’s stake in Intel is passive (no board seat; it votes with management), but it puts federal capital on the company’s cap table.
Direct federal investment in a company that produces a key input for AI technology (semiconductor chips) has its supporters and detractors. Ben Thompson of Stratechery argues the equity move is a credible, long-horizon signal to keep a U.S. foundry viable. On the other side of the debate, economist Tyler Cowen is wary of the risks of creeping state ownership of industry. Either way, the move signals a new phase of the federal government’s involvement in the AI sector.
In other news:
- Buckeyes leading the way: Ohio is the first state in the nation to require all public K-12 districts to adopt an AI use policy.
- On your marks, get set, vibecode! The White House released details on how students and teachers can participate in the Presidential AI Challenge.
- Gearing up for political battles: Two pro-AI super PACs will have about $200 million to fund their advocacy in the 2026 midterm elections, as well as in key state legislative races in California, Illinois, New York, and Ohio.
- Another bite at the AI regulation apple: Colorado lawmakers want to revise AI regulations they passed in 2024, but there’s little agreement on what those changes ought to be.
- Replacing the bureaucrats with bots? Albanian lawmakers are using AI to reduce corruption and have floated the concept of having an entirely AI-run ministry in the future.
*EDITOR’S NOTE: This newsletter includes mention of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988. There is also an online chat at 988lifeline.org.