September 26, 2024

The Leading Indicator: Issue Four

By Alex Spurrier | Marisa Mission


Before we get to the latest artificial intelligence (AI) news, we have some Bellwether-specific AI updates:

  • ICYMI: Our colleagues recently released a three-part Learning Systems series about the AI landscape in K-12 education — please read and share with your networks!
  • Bellwether is fielding a survey of district and charter school leaders to understand how they’re approaching the opportunities and challenges presented by AI. If you’re a district or charter leader, we’d love to hear from you! 

OpenAI recently released two new models — o1-preview and o1-mini — that are an impressive step forward for AI technology. To date, most Large Language Models (LLMs) have responded to prompts by predicting the next-best token to generate, an approach that can lead to “hallucinations” that go unnoticed by the LLM. The new o1 models take a different approach: they are trained with reinforcement learning to “reason” before answering, first devising a plan for responding to a prompt and then executing those steps. As a result, o1 models are far more capable than other LLMs at logic-heavy tasks. They’re even pretty good at solving the daily New York Times word puzzle, Connections — a task where other LLMs consistently fall short of human performance.
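For readers who want to see the contrast for themselves, here’s a minimal sketch of how one might put the same logic-heavy prompt to a conventional next-token model and to o1-mini via OpenAI’s Python SDK. The puzzle text is invented for illustration, and the sketch assumes the openai package is installed and an OPENAI_API_KEY environment variable is set:

```python
# Minimal sketch: compare a conventional LLM with a reasoning (o1) model
# on a logic-heavy, Connections-style grouping task.
# Assumptions: `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# An invented, Connections-style puzzle used purely for illustration.
puzzle = (
    "Group these 8 words into 2 categories of 4 and name each category: "
    "bass, flounder, guitar, violin, trout, cello, salmon, harp."
)

for model in ("gpt-4o-mini", "o1-mini"):
    # o1 models work through hidden reasoning tokens before answering,
    # so identical prompts can yield very different results on logic tasks.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": puzzle}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```

Running both side by side makes the difference tangible: the reasoning model spends time “thinking” through the groupings before it commits to an answer.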

This new step forward in AI technology opens up more possibilities for adults working in schools. AI applications that model and reflect on their “thought process” could not only supercharge the capacity of staff working outside of classrooms but also give educators a novel resource for reflecting on and refining their own instruction. But as Dan Meyer notes, LLMs still face a major obstacle in supporting students and capturing their attention on a sustained basis.

The potential of ever-evolving LLMs is exciting, but let’s not forget about addressing the practical needs of the humans in the loop.

Education Evolution: AI and Learning

A new school year is underway, and attention is returning to the use of AI in classrooms. We’re starting to see substantive research about where AI is truly helpful and, equally important, its limitations. In the last issue, we highlighted a study that showed how educational guardrails built into ChatGPT-4 could mitigate potential harm to students’ learning. Building on that idea, a Stanford University coding class conducted a randomized controlled trial and found surprising exam gains among students offered a GPT-4 chatbot, even though the chatbot negatively affected engagement. Researchers at the University of California, Berkeley investigated ways to combat chatbots’ math hallucinations and applied their approach to math tutoring with promising results.
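For readers curious what an “educational guardrail” looks like in practice, here’s a minimal sketch of the general pattern: a system prompt that steers a general-purpose model toward hints rather than answers. This is an illustration under our own assumptions (hypothetical prompt text and model name), not the implementation from the studies cited above:

```python
# Minimal sketch of an "educational guardrail": a system prompt that steers
# a general-purpose model toward tutoring (hints, not answers). Illustrative
# only; not the guardrails from the studies discussed above.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TUTOR_GUARDRAIL = (
    "You are a tutor. Never give the final answer outright. "
    "Guide the student with questions and hints, one step at a time, "
    "and ask them to attempt each step before you continue."
)

def tutor_reply(student_message: str) -> str:
    # The guardrail rides along as a system message with every request,
    # constraining how the model responds to the student.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": TUTOR_GUARDRAIL},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

print(tutor_reply("What is the derivative of x^3 + 2x?"))
```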

We’re also seeing more work highlighting how AI can help educators. One case study evaluated AI’s effect on lesson planning and found that teachers who used AI as a “thought partner” in addition to a “content generator” reported productivity gains. These results paint a rosy picture of classroom-based AI integration; however, as South Korea is finding out, parental skepticism could derail the rollout of AI tools before they even make their way into classrooms.

🛍️ Beating Buyer’s Remorse

♿ Unlocking AI’s Potential to Serve Students with Disabilities

🏛️ Higher Education Hype

  • Support for AI upskilling and research is ramping up across higher education, from community colleges to state universities to the Ivy League.
  • Yet, a survey of 63 university leaders found that significant investment is still needed to match the growing enthusiasm.
  • In California, Gov. Gavin Newsom is spearheading a partnership with NVIDIA to bring AI laboratories, higher education curricula, professional development training, and industry certifications to the state’s public higher education and workforce systems.

The Latest: AI Sector Updates

The AI sector moves so fast that we rarely pause to reflect and look back. Claire Zau’s recent newsletter, Observations from the AI Model Olympics, offers a chance to do so, summarizing the myriad developments released during summer 2024 alone. Professor Ethan Mollick also reflects on AI’s progress in his piece, Gradually, then Suddenly: Upon the Threshold, reminding us that while new AI models are often flashy and exciting, the “thresholds of actual use” are often invisible, even as they mark powerful milestones of technological progress. Finally, RAND offers lessons worth heeding in a report analyzing common root causes of failure for AI projects.

🏆 The Latest and Greatest

© When Imitation is NOT Flattery

🦺 Safety First

Pioneering Policy: AI Governance, Regulation, Guidance, and More

Back in Issue #2, we highlighted Jeremy Howard’s piece about California’s Senate Bill (SB) 1047 as an example of the challenges policymakers face in regulating AI. Just two months later, that bill has passed both houses of the California State Legislature and is with Gov. Newsom, who is set to take action on it before the end of September 2024. The bill no longer looks the way it did earlier this summer: it was amended in response to criticism from AI companies and elected officials alike, including U.S. Rep. Nancy Pelosi. The bill aims to avoid “catastrophic harm” caused by large AI models, but even with the amendments, critics worry that it could stifle innovation and research by making developers liable for harms perpetrated by users. Whether or not Gov. Newsom signs SB 1047 into law, the core question of who should be responsible for potential harms resulting from generative AI is likely to surface again.

🇪🇺 Across the Pond: Compare and Contrast

  • On Aug. 1, 2024, the European Union (EU)’s AI Act went into effect. The law regulates different use cases of AI, with most classified as low or no risk. High-risk uses defined in the law include AI in education and employment; developers of these systems must now comply with quality assurance obligations, and public authorities using high-risk systems will need to register them in an EU database.
  • The Council of Europe’s Framework Convention on Artificial Intelligence touts itself as the “first-ever international legally-binding treaty” on AI, meaning that, in theory, signatories would face consequences for violating its requirements. The U.S. is a signatory, but the Convention will not take effect until five signatories ratify it.

🔬 The Need for Speed in R&D

  • Over the summer, two former state chiefs of education published an urgent call for more R&D in K-12 education.
  • The next day, U.S. Senators Michael Bennet and John Cornyn introduced the bipartisan New Essential Education Discoveries (NEED) Act, which would create “a national center that advances high-risk, high-reward education research projects.”
  • These moves come on the heels of the Biden administration’s announcement of massive joint federal and philanthropic investments into public interest technology, including $32 million into the National Science Foundation’s Experiential Learning in Emerging and Novel Technologies program.

💰 Silicon Valley Carries a Big Stick
