December 20, 2024

The Leading Indicator: Issue Six

By Alex Spurrier | Marisa Mission


When artificial intelligence (AI) comes up in the education sector, one of the most frequently asked questions we hear is: How should AI be used in schools? It’s a good question, but after this year’s ExcelinEd National Summit on Education, it might be too narrow.

An AI-focused panel at the Summit offered some specific examples of thoughtful AI pilots from around the country. Adam DiBenedetto, Director of Academic Innovation at the Louisiana Department of Education, shared the Pelican State’s pilot of AI-powered software to support tutoring in reading for English learners (ELs). David Zatorski, vice principal of First Avenue School in Newark, New Jersey, spoke about how his team extended their use of Khan Academy for math to include a pilot of the Khanmigo chatbot. In both cases, these AI pilots offer incremental applications of technology to specific contexts: scaling reading tutoring services for ELs in Louisiana and building on the existing use of Khan Academy for math in New Jersey.

Other Summit panels urged a more cautious approach. Arkansas Gov. Sarah Huckabee Sanders spoke about her state’s recent restrictions on student cell phone use (19 states have similar restrictions and/or bans). And “Stolen Focus” author Johann Hari pushed Summit attendees to engage with broader AI debates about “What tech, designed in what way, working in whose interests?”

Outside the education conference circuit, it’s high time to DTR: define the relationship. Right now, too many schools are in “situationships” with tech tools — their roles are undefined, fleeting, and often directionless. What does this look like in practice? Research from Instructure indicates that in the 2023-24 school year, students interacted with an average of 45 unique ed tech tools. In a typical 180-day school year, that’s one new ed tech tool every four instructional days. 

We’ve also seen recent research demonstrating that analog and tactile approaches to learning may be more effective than digital methods in some cases. Elementary- and middle-grade students have better comprehension when reading from physical books versus screens. Taking notes by hand can provide more cognitive benefits than typing. These findings are urgent at a time when addressing COVID-19 pandemic-related learning loss remains a massive challenge.

School systems are navigating seemingly contradictory pressures to increase some uses of technology (AI) and limit others (cell phones). Tech broadly and AI specifically can likely help students develop the knowledge and skills they need to thrive, but the challenge (and opportunity) is to determine how technology should be part of that effort. Moving forward, sector leaders should prompt a DTR talk about the role technology — not just AI — should play in K-12 classrooms. 

Education Evolution: AI and Learning

A recent “60 Minutes” profile highlighting Khanmigo’s potential is likely to drive greater interest in one of the most prominent AI products in the K-12 sector. The profile shows students using the chatbot in a chemistry class, along with a vignette where Anderson Cooper revises an essay he wrote as a sixth grader. It even includes an example of the most popular chatbot (ChatGPT) and its forthcoming “vision” capability making a mistake when calculating the area of a triangle drawn on a chalkboard. The profile ends with an aspirational message from Founder and CEO Sal Khan:

“The hope here is that we can use AI and other technologies to amplify what a teacher can do so they can spend more time doing a good job standing next to a student, figuring them out, [and] having a person-to-person connection.”

It’s a hope worth working towards, but if you’re inclined to be an AI optimist, it’s worth pressure-testing your thinking with the perspectives of thoughtful critics. Dan Meyer weighed in with his skepticism coming out of the “60 Minutes” profile specifically and Marc Watkins highlights the challenges facing educators as students outsource their reading to ChatGPT. For an even wider lens, check out Benjamin Riley’s piece mapping out the different forms of AI skepticism.

🐘 A White Elephant Assortment of Edu-AI News

The Latest: AI Sector Updates

OpenAI’s shift from a nonprofit model to a public benefit corporation has garnered attention and some litigation, but more than anything else, the shift is about generating cash. CEO Sam Altman put it bluntly: “The simple thing was we just needed vastly more capital than we thought we could attract [as a nonprofit].”

Why? Costs associated with developing more advanced models and powering the growth of AI services are going up, up, up. Estimates from Bain & Company project that the market for generative AI might reach nearly $1 trillion by 2027. It’s an eye-popping figure that reflects the growing energy, hardware, and software needs of this rapidly growing sector.

But simply plowing more resources into computing capacity to train ever-larger Large Language Models (LLMs) might not be enough to continue the current trajectory of LLM improvement. Companies may need to pursue different approaches, such as improving how their models process information after they’ve been trained and/or focusing on turning current model capabilities into better products. 

It’ll be worth seeing if model scaling makes a comeback or if the sector doubles down on more domain-specific and product-focused improvements in the year ahead.

🎁 ‘Tis the Season for New Models

Pioneering Policy: AI Governance, Regulation, Guidance, and More

For federal, state, and local governments, AI promises two things sorely needed among all the bureaucratic red tape: speed and efficiency. Wins like the Department of the Treasury’s recovery of $1 billion from fraudsters, in just one year, highlight this potential. Using AI for high-stakes decision-making generates apprehension and criticism, however, as Nevada found out when it used AI to identify at-risk students: An alarming statewide drop in the number of students identified — as well as the subsequent drop in school funding — sparked questions about transparency, fairness, and the appropriate role of algorithms. 

This tension is also apparent in the U.S. House of Representatives’ Bipartisan AI Task Force’s final report: while governments should “be wary of algorithm-informed decision-making,” the Congressional authors still recommend using AI to reduce administrative burden and bureaucracy. A fine line seems to be emerging between helpful automation and high-stakes overreach, but more guidance will be needed to determine exactly where it falls.

🔮 Post-Election Predictions: Looking Into the Crystal Ball

⚖️ To Regulate or Not to Regulate (continued…)

📚 New Resources

The Organisation for Economic Co-operation and Development (OECD) also released a recent toolkit that looks at the landscape of approaches to AI in the public sector for G7 countries. The toolkit offers suggestions for ways to solve common implementation problems and a framework “journey” for implementing AI solutions.
