If you like the content we’re delivering in this newsletter, vote for Bellwether’s artificial intelligence (AI)-themed panel at SXSW EDU 2025, “AI in Action: Innovative Approaches in Schools and Districts,” featuring Amy Chen Kulesa. To vote, sign in or create an account for free at panelpicker.sxsw.com.
A new school year is already underway in parts of the country, and many education policymakers and school system leaders probably feel like they’re still playing catch-up when it comes to AI. Products boasting AI features are proliferating in the education market as the underlying technology continues to evolve. Staying up to date with these dynamics is a Sisyphean challenge for researchers, let alone for those charged with leading school systems or for lawmakers for whom education policy is but one of many legislative committee assignments.
School system leaders and education policymakers should avoid getting caught up in day-to-day edtech product releases. Instead, they should assess the ever-evolving edtech landscape against a set of guiding principles. To understand the underlying technology, Ben Riley from Cognitive Resonance and Paul Bruno from the University of Illinois Urbana-Champaign offer a valuable primer in their latest release, which explains how large language models (LLMs) work, their limitations, and the “education hazards” these systems may pose if they are used inappropriately. For those guiding principles, CRPE recently published lessons from a forum it hosted on driving “meaningful and positive change in education” with AI, including a road map to carry this work forward. Together, these publications can help school system leaders feel a little more confident about managing AI in classrooms and administrative offices in the 2024-25 school year.
But a few back-to-school locations are overlooked in most conversations about AI in schools: the cafeteria, the hallways, the playground — the cauldrons of peer socialization. In the wake of the pandemic years’ social disruption, fueled by remote schooling, children are increasingly forming parasocial relationships with AI avatars. These AI companions aren’t the Tamagotchi of years past — they are engaging, reciprocal outlets that can conjure the feeling of friendship.
Should AI play a role in human social development? There might be acute instances where AI chatbots could do some good, but examples of adult AI “friendship” are unsettling, both in pop culture and, soon, as a product. When it comes to children’s developing minds, extreme caution is warranted. School cellphone bans may be one way to confine texting — or even speaking — with AI “friends” to after-school hours. One way or another, this AI social dynamic will seep into schools; school system leaders and policymakers should prepare for what that could mean for their schools and communities.
Education Evolution: AI and Learning
While much of the chatter on AI and education has centered on K-12, Mary Meeker — a well-known tech analyst and venture capitalist — highlighted that “AI’s impact on higher education will be significant” and that universities will need a “mindset change” to stay relevant for younger generations. OpenAI is right there with her, with the recent launch of ChatGPT Edu as a resource for institutions of higher education (IHEs) looking to deploy AI. And many IHEs are answering the call: Last month, Morehouse College announced the use of AI teaching assistants that will provide students with extended after-hours support. Beyond the classroom, Idaho’s Generative AI in Higher Education Fellows will “help create… eventual [State] Board policies to ensure [AI] is used in productive and ethical ways on Idaho’s college and university campuses.” K-12 leaders can learn from these efforts — what works well, what doesn’t — to inform their approach to AI.
🔤 The ABCs of GPTs: A Need for Foundational AI Literacy
- The President and CEO of Lumina Foundation, Jamie Merisotis, argues that when it comes to preparing students for a world with ubiquitous AI, IHEs play a crucial role by pulling together coherent learning pathways.
- Students agree: Recent survey results from Cengage found that majorities of college graduates (62% to 74%) don’t feel prepared to use generative AI tools.
- Alex Kotran, CEO and co-founder of the AI Education Project, highlighted the need for “foundational understanding” of AI — beyond prompt engineering or brainstorming new capabilities. AI literacy, he argues, must be woven into curricula beyond STEM so that everyone can have thoughtful conversations without being a technologist.
- Recently launched Eureka Labs is taking a stab at that vision. Envisioning an “AI native” school, it is developing a new “undergraduate-level class that guides the student through training their own AI.”
🦾 The Cyborg-ification of Teaching
- New Hampshire’s Executive Council approved a $2.3 million, federally funded contract that will pay for districts to incorporate Khanmigo into the classroom for the next year.
- The Bureau of Labor Statistics showcased a report demonstrating the high potential of “computer-assisted learning” to scale high-dosage tutoring. Important note: the technology studied used AI algorithms, but no chatbots.
- Early rollouts of teaching assistants such as Origin and Khanmigo demonstrate some success, but highlight that AI can’t do one crucial part of learning: make students care. In that respect, schools might be like “great white sharks,” a species that has survived several mass extinction events because, “put simply, they work.”
⚖️ Accountability With, By, and for AI
- Beyond classroom assistants, AI models could also become classroom observers, which could help teachers improve their pedagogical skills independently. Integrating AI into teacher evaluations, however, is not on the table.
- The Wall Street Journal recently reported that OpenAI has a tool that is “99.9% effective” at identifying ChatGPT-generated text, but the tool remains unreleased over concerns that include the impact of false positives and its potential to drive down user engagement (a back-of-the-envelope sketch after this list shows why false positives loom so large). With or without that tool, reframing academic integrity as a core part of technical literacy is probably a good idea.
- A recent report found that access to a basic ChatGPT-powered tutor can become a “crutch” and harm students’ learning once the tool is taken away. However, those losses were mitigated among students with access to a version of ChatGPT designed to not reveal answers and to push students to explain their thinking.
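Why would false positives worry OpenAI when the tool catches 99.9% of AI text? A minimal sketch: the reported figure describes catching AI-generated writing, and OpenAI has not published a false-positive rate for human writing, so every number below (essay volume, AI share, the 0.1% false-positive rate) is a hypothetical assumption for illustration, not a claim about the actual tool.

```python
# Back-of-the-envelope: why false positives loom large even for a detector
# reported as "99.9% effective" at catching ChatGPT-generated text.
# All inputs below are hypothetical assumptions for illustration.

essays_submitted = 100_000   # essays scanned district-wide per year (assumed)
share_ai_written = 0.20      # share actually AI-generated (assumed)
true_positive_rate = 0.999   # WSJ-reported detection rate for AI text
false_positive_rate = 0.001  # assumed rate of flagging honest human work

ai_essays = essays_submitted * share_ai_written
human_essays = essays_submitted - ai_essays

caught = ai_essays * true_positive_rate               # AI essays correctly flagged
wrongly_flagged = human_essays * false_positive_rate  # honest students accused

precision = caught / (caught + wrongly_flagged)
print(f"AI essays flagged: {caught:,.0f}")                      # ~19,980
print(f"Human essays wrongly flagged: {wrongly_flagged:,.0f}")  # ~80
print(f"Chance a given flag is correct: {precision:.1%}")       # ~99.6%
```

Even under these favorable assumptions, roughly 80 honest students per year would be wrongly accused, the kind of outcome that makes districts (and vendors) hesitant to lean on detection alone.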
The Latest: AI Sector Updates
Yet another new model takes the stage in Meta’s Llama 3.1, which claims to outperform both GPT-4o and Claude 3.5 Sonnet. Unlike ChatGPT and Claude, however, Llama is open source, which Meta Founder and CEO Mark Zuckerberg is betting will help Llama 3.1 overtake its competitors. OpenAI, meanwhile, seems to be looking in a different direction: The company’s latest release, SearchGPT, is designed to offer “answers first” and compete with Google’s AI-integrated search engine.
💸 The ROI on Generative AI is Not Looking Good
- Llama 3.1 was trained with “over 16,000 of Nvidia’s ultraexpensive H100 GPUs,” leading The Verge to guess that its development cost “hundreds of millions of dollars” (rough arithmetic after this list suggests why). Unsurprisingly, Llama 3.1 is only free to use for a limited number of prompts, suggesting that it’s expensive to run at full scale.
- Extremely high development and operating costs (to the tune of billions of dollars), as well as murky projections for potential revenue, plague the generative AI industry.
- While all generative AI companies face these challenges, some analysts are concerned about the sustainability of OpenAI’s approach. Its latest launch, GPT-4o mini, promises greater cost efficiency, but that might not mean much in the race to continually build bigger and better models.
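The Verge’s guess is easy to sanity-check. A rough sketch, assuming an H100 price of about $30,000 and a 90-day training run; Meta has not disclosed its actual spend, so every figure below is an assumption:

```python
# Rough, illustrative arithmetic behind The Verge's "hundreds of millions"
# guess. The GPU price, training duration, power draw, and electricity
# rate are all assumptions; only the 16,000-GPU count was reported.

num_gpus = 16_000       # reported H100 count used to train Llama 3.1
gpu_unit_cost = 30_000  # assumed per-unit H100 price, USD
training_days = 90      # assumed wall-clock training time
gpu_power_kw = 0.7      # approximate per-GPU power draw, kW
price_per_kwh = 0.10    # assumed industrial electricity rate, USD

hardware_cost = num_gpus * gpu_unit_cost
energy_cost = num_gpus * gpu_power_kw * 24 * training_days * price_per_kwh

print(f"GPU hardware alone: ${hardware_cost / 1e6:,.0f}M")      # ~$480M
print(f"Electricity for one run: ~${energy_cost / 1e6:,.1f}M")  # ~$2.4M
```

Hardware alone lands near $480 million under these assumptions, before networking, data centers, staff, and failed experiments, which is why “hundreds of millions” reads as a floor rather than a ceiling.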
🚩 Big Data, Big Challenges
- Much like a puppy with kibble, growing AI models need ever-larger datasets to train on — so much so that there may not be enough high-quality data in the world to train the newest models.
- The dearth of training data is exacerbated by concerns over copyright infringement, fair compensation, data worker exploitation, and data privacy.
- One potential solution is to train new AI models on synthetic, or AI-generated, data. However, researchers caution that this “inbreeding” would create not only worse models, but also a “self-consuming loop” that devours the model’s stability (the toy simulation after this list shows the dynamic in miniature).
- Some experts worry that eventually, the increasing amount of AI-generated content on the internet could poison future training data (i.e., the internet itself), stagnating the progress of generative AI.
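What does a “self-consuming loop” look like? A toy simulation, not a claim about any particular system: here the “model” is just a word-frequency table refit each generation to text it generated itself, and all names and numbers are arbitrary illustrations.

```python
# Toy illustration of the "self-consuming loop": a model repeatedly
# retrained on its own synthetic output. Real LLM collapse is far more
# complex; this cartoon only shows the loss of rare ("tail") content.

import random
from collections import Counter

random.seed(42)

# "Real" data: a long-tailed vocabulary where a few words dominate,
# loosely mimicking word frequencies in natural text.
vocab = [f"w{i}" for i in range(500)]
weights = [1 / (i + 1) for i in range(500)]  # Zipf-like tail
corpus = random.choices(vocab, weights=weights, k=2_000)

for generation in range(8):
    model = Counter(corpus)  # "train": fit word frequencies to the corpus
    print(f"gen {generation}: distinct words remaining = {len(model)}")
    # "Generate" the next training corpus purely from the current model.
    # A rare word whose count hits zero can never be generated again.
    corpus = random.choices(list(model), weights=list(model.values()), k=2_000)
```

Because a rare word that fails to appear in one generation can never reappear, diversity only ratchets downward; researchers describe an analogous loss of tail knowledge when models train on model-generated text.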
Pioneering Policy: AI Governance, Regulation, Guidance, and More
Over the summer, a slew of states released AI frameworks and guidance for districts, vendors, and other stakeholders, including Minnesota, New Jersey, North Dakota, and Wyoming. The Los Angeles County Office of Education also published a set of guidelines for its districts to use in managing risks as AI-based technologies are rolled out. It’s unknown whether or how these might have been informed by the Los Angeles Unified School District fiasco from June, which underscores that AI in K-12 is a bit like the “Wild West” — some schools and districts are moving much faster than guidance can be developed. Meanwhile, the U.S. Department of Education recently published guidelines for developers looking to design AI-based technologies (whether LLM-based or not) for educational purposes.
📜 To Regulate, or Not To Regulate: That is the Big Question
- The Fifth Circuit Court of Appeals in New Orleans ruled that the Federal Communications Commission’s Universal Service Fund, which subsidizes telephone and broadband services for rural and low-income communities, is unconstitutional. Although the news is not directly AI-related, the decision will ultimately impact rural and low-income students’ access to AI-based services. Erin Mote, CEO of InnovateEDU, lamented on X that, “This is among the most important fights for the critical public infrastructure in our country, for digital equity and for access. […] The impact of this decision will affect every community, every school and every library.”
- On the other hand, the Federal Trade Commission ordered eight companies offering AI-powered pricing services to submit data on the technology’s potential impact on consumer privacy and protection, as well as on market competition.
- The U.S. Senate passed updated versions of the Kids Online Safety Act (KOSA) and the Children’s Online Privacy Protection Act (COPPA) in a bipartisan vote, despite KOSA raising questions about equitable access to information and potential AI weaponization. The bills will go to the U.S. House of Representatives in September.
- AI industry leaders are “war gaming” various scenarios that a second Trump administration could bring, including an about-face in policy that might change the sector’s trajectory.