When artificial intelligence (AI) comes up in the education sector, one of the most frequently asked questions we hear is: How should AI be used in schools? It’s a good question, but after this year’s ExcelinEd National Summit on Education, it might be too narrow.
An AI-focused panel at the Summit offered some specific examples of thoughtful AI pilots from around the country. Adam DiBenedetto, Director of Academic Innovation at the Louisiana Department of Education, shared the Pelican State’s pilot of AI-powered software to support tutoring in reading for English learners (ELs). David Zatorski, vice principal of First Avenue School in Newark, New Jersey, spoke about how his team extended their use of Khan Academy for math to include a pilot of the Khanmigo chatbot. In both cases, these AI pilots offer incremental applications of technology to specific contexts: scaling reading tutoring services for ELs in Louisiana and building on the existing use of Khan Academy for math in New Jersey.
Other Summit panels urged a more cautious approach. Arkansas Gov. Sarah Huckabee Sanders spoke about her state’s recent restrictions on student cell phone use (19 states have similar restrictions and/or bans). And “Stolen Focus” author Johann Hari pushed Summit attendees to engage with broader AI debates about “What tech, designed in what way, working in whose interests?”
Outside the education conference circuit, it’s high time to DTR: define the relationship. Right now, too many schools are in “situationships” with tech tools — their roles are undefined, fleeting, and often directionless. What does this look like in practice? Research from Instructure indicates that in the 2023-24 school year, students interacted with an average of 45 unique ed tech tools. In a typical 180-day school year, that’s one new ed tech tool every four instructional days.
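As a quick back-of-the-envelope check on that rate, using the figures above:

$$\frac{180 \text{ instructional days}}{45 \text{ tools}} = 4 \text{ instructional days per tool}$$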
We’ve also seen recent research demonstrating that analog and tactile approaches to learning may be more effective than digital methods in some cases. Elementary- and middle-grade students have better comprehension when reading from physical books versus screens. Taking notes by hand can provide more cognitive benefits than typing. These findings are urgent at a time when addressing COVID-19 pandemic-related learning loss remains a massive challenge.
School systems are navigating seemingly contradictory pressures to increase some uses of technology (AI) and limit others (cell phones). Tech broadly and AI specifically can likely help students develop the knowledge and skills they need to thrive, but the challenge (and opportunity) is to determine how technology should be part of that effort. Moving forward, sector leaders should prompt a DTR talk about the role technology — not just AI — should play in K-12 classrooms.
Education Evolution: AI and Learning
A recent “60 Minutes” profile highlighting Khanmigo’s potential is likely to drive greater interest in one of the most prominent AI products in the K-12 sector. The profile shows students using the chatbot in a chemistry class, along with a vignette where Anderson Cooper revises an essay he wrote as a sixth grader. It even includes an example of the most popular chatbot (ChatGPT) and its forthcoming “vision” capability making a mistake when calculating the area of a triangle drawn on a chalkboard. The profile ends with an aspirational message from Founder and CEO Sal Khan:
“The hope here is that we can use AI and other technologies to amplify what a teacher can do so they can spend more time doing a good job standing next to a student, figuring them out, [and] having a person-to-person connection.”
It’s a hope worth working toward, but if you’re inclined to be an AI optimist, pressure-test your thinking against the perspectives of thoughtful critics. Dan Meyer weighed in with skepticism coming out of the “60 Minutes” profile specifically, and Marc Watkins highlighted the challenges facing educators as students outsource their reading to ChatGPT. For an even wider lens, check out Benjamin Riley’s piece mapping out the different forms of AI skepticism.
🐘 A White Elephant Assortment of Edu-AI News
- Bellwether Senior Associate Partner Amy Chen Kulesa joined Rachael Maves’ podcast for a conversation about AI in education.
- Renaissance Philanthropy is partnering with the Walton Family Foundation to launch an AI and education program focused on “identifying breakthrough ideas at the intersection of AI and learning science.”
- ERS shared its perspective on how AI might change the use of people, time, and money in K-12 schools.
- The rise of ChatGPT led to the fall of Chegg, a “homework help” (cheating?) service.
- In a Wild West marketplace, some ed tech companies are pushing for the development of “responsible AI products.”
- The founder and former CEO of AllHere, the company behind the LAUSD “Ed” chatbot fiasco, is facing federal charges for defrauding investors.
The Latest: AI Sector Updates
OpenAI’s shift from a nonprofit model to a public benefit corporation has garnered attention and some litigation, but more than anything else, the shift is about generating cash. CEO Sam Altman put it bluntly: “The simple thing was we just needed vastly more capital than we thought we could attract [as a nonprofit].”
Why? Costs associated with developing more advanced models and powering the growth of AI services are going up, up, up. Estimates from Bain & Company project that the market for generative AI might reach nearly $1 trillion by 2027. It’s an eye-popping figure that reflects the growing energy, hardware, and software needs of this rapidly growing sector.
But simply plowing more resources into computing capacity to train ever-larger Large Language Models (LLMs) might not be enough to continue the current trajectory of LLM improvement. Companies may need to pursue different approaches, such as improving how their models process information after they’ve been trained and/or focusing on turning current model capabilities into better products.
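To make “improving how models process information after they’ve been trained” concrete, here is a minimal sketch of one such inference-time technique, best-of-N sampling. This is an illustration, not any vendor’s actual API: `generate_candidate` and `score_answer` are hypothetical stand-ins for a model’s sampling call and a verifier.

```python
# Minimal sketch of best-of-N sampling, one inference-time technique:
# rather than training a bigger model, draw several candidate answers
# from an existing model and keep the one a verifier scores highest.
import random

def generate_candidate(prompt: str) -> str:
    # Hypothetical stand-in for a sampling call to an already-trained model.
    return f"candidate answer #{random.randint(0, 9999)} to {prompt!r}"

def score_answer(prompt: str, answer: str) -> float:
    # Hypothetical stand-in for a verifier or reward model that rates quality.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    """Spend more compute at inference: sample n answers, keep the best one."""
    candidates = [generate_candidate(prompt) for _ in range(n)]
    return max(candidates, key=lambda answer: score_answer(prompt, answer))

if __name__ == "__main__":
    print(best_of_n("Summarize the main causes of learning loss."))
```

The design tradeoff: answer quality scales with n (more compute per query) rather than with model size (more compute during training).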
It’ll be worth seeing if model scaling makes a comeback or if the sector doubles down on more domain-specific and product-focused improvements in the year ahead.
🎁 ‘Tis the Season for New Models
- OpenAI is launching several new products as part of its “12 days of OpenAI” event, including o1 and o1 pro models for ChatGPT, a new version of its Sora video generator, and a slew of new developer tools.
- Google DeepMind announced a new video generator, Veo 2, which will compete with OpenAI’s Sora in the next-gen AI video space.
- All X users now have access to a new Grok 2 model from xAI, with image-generation features set to roll out in the coming weeks.
- Meta’s new Llama 3.3 model can match the performance of its largest model (Llama 3.1 405B) at a much lower cost.
Pioneering Policy: AI Governance, Regulation, Guidance, and More
For federal, state, and local governments, AI promises two things sorely needed amid bureaucratic red tape: speed and efficiency. Wins like the Department of the Treasury’s recovery of $1 billion from fraudsters in just one year highlight this potential. Using AI for high-stakes decision-making generates apprehension and criticism, however, as Nevada found out when it used AI to identify at-risk students: an alarming statewide drop in the number of students identified, as well as the subsequent drop in school funding, sparked questions about transparency, fairness, and the appropriate role of algorithms.
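To see why cases like Nevada’s raise transparency questions, here is a toy illustration, with invented scores and cutoffs rather than Nevada’s actual model, of how the threshold a system uses to label students “at risk” changes how many are flagged, and therefore how much targeted funding flows:

```python
# Toy illustration: how an "at-risk" cutoff changes headcounts.
# Risk scores here are randomly generated, not real student data.
import random

random.seed(0)
# 100,000 hypothetical students with risk scores between 0 and 1.
risk_scores = [random.random() for _ in range(100_000)]

for cutoff in (0.50, 0.80, 0.95):
    flagged = sum(score >= cutoff for score in risk_scores)
    print(f"cutoff {cutoff:.2f}: {flagged:>6,} students flagged as at-risk")
```

Same students, same scores, very different headcounts, which is why where (and why) the line gets drawn deserves public scrutiny.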
This tension is also apparent in the U.S. House of Representatives’ Bipartisan AI Task Force’s final report: while governments should “be wary of algorithm-informed decision-making,” the Congressional authors still recommend using AI to reduce administrative burden and bureaucracy. A line between acceptable and unacceptable uses seems to be emerging, but more guidance will be needed to determine where it actually falls.
🔮 Post-Election Predictions: Looking Into the Crystal Ball
- Deregulation is the name of the game: Young AI just got a ticket to run wild.
- Neil Chilson, head of AI policy at the Abundance Institute, makes 10 predictions on specific actions a Trump administration might take on AI policy.
- Axios and the New York Times had their own predictions, which largely aligned on things like content moderation and TikTok’s survival. The Times also highlighted how Silicon Valley would react to Trump policies and what that means for the tech industry in the long run.
⚖️ To Regulate or Not to Regulate (continued…)
- Anthropic put out an urgent call for policymakers to create “proactive” regulation on AI companies and offered some ideas based on its own Responsible Scaling Policy.
- Just as urgently, R Street’s Adam Thierer is calling on Texas lawmakers to reject draft legislation, citing concerns similar to those brought up against the failed California Senate Bill 1047.
- A recent report from the Center on Reinventing Public Education gives the sector a snapshot in time of the policy landscape (or lack thereof) for AI in education.
📚 New Resources
- The U.S. Department of Education released a new toolkit for K-12 school and district leaders that aims to help them navigate the adoption of AI-based tools. Major topics include mitigating risks, building intentional strategies, and maximizing impact through regular evaluation.
- The U.S. Commission on Civil Rights released a report on “The Rising Use of AI in K-12 Education,” including recommendations to support procurement, implementation, and auditing of AI tools.
- The Organisation for Economic Co-operation and Development (OECD) also recently released a toolkit that surveys approaches to AI in the public sector across G7 countries. The toolkit offers suggestions for solving common implementation problems and a framework “journey” for implementing AI solutions.