Before we get to the latest artificial intelligence (AI) news, we have some Bellwether-specific AI updates:
- ICYMI: Our colleagues recently released a three-part Learning Systems series about the AI landscape in K-12 education — please read and share with your networks!
- Bellwether is fielding a survey of district and charter school leaders to understand how they’re approaching the opportunities and challenges presented by AI. If you’re a district or charter leader, we’d love to hear from you!
OpenAI recently released two new models — o1-preview and o1-mini — that are an impressive step forward for AI technology. To date, most Large Language Models (LLMs) have responded to prompts by predicting the most likely next token to generate — an approach that can lead to “hallucinations” that go unnoticed by the LLM. The new o1 models work differently: trained with reinforcement learning to reason through a “chain of thought,” the model first devises an approach to a prompt and then executes those steps. As a result, o1 models are much better at logic-heavy tasks than other LLMs. They’re even pretty good at solving the daily New York Times word puzzle, Connections — a task where other LLMs consistently fall short of human performance.
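For readers curious about the mechanics, here is a minimal sketch of the difference. The vocabulary, scoring function, and function names are invented for illustration; this is not OpenAI's API or o1's actual training procedure, only a toy contrast between committing to one next token at a time and drafting intermediate steps before answering.

```python
# Purely illustrative sketch: not OpenAI's API or o1's training method.
# It contrasts (a) committing to the single "best" next token at each step
# with (b) drafting a scratchpad of intermediate steps before answering.

VOCAB = ["The", "answer", "is", "42", "."]


def score_next_token(context):
    """Toy stand-in for an LLM's next-token probabilities."""
    favored = len(context) % len(VOCAB)
    return {tok: (1.0 if i == favored else 0.1) for i, tok in enumerate(VOCAB)}


def greedy_decode(prompt, max_tokens=5):
    """Classic decoding: repeatedly commit to the highest-scoring next token."""
    context = prompt.split()
    for _ in range(max_tokens):
        scores = score_next_token(context)
        context.append(max(scores, key=scores.get))
    return " ".join(context)


def plan_then_execute(prompt, num_steps=2):
    """Reasoning-style decoding: draft a plan, then generate the answer.

    The answer is generated conditioned on the plan as well as the prompt,
    which is the rough intuition behind chain-of-thought reasoning.
    """
    plan = [f"Step {i}: break down '{prompt}'" for i in range(1, num_steps + 1)]
    return greedy_decode(prompt + " " + " ".join(plan))


if __name__ == "__main__":
    print(greedy_decode("Solve the puzzle:"))
    print(plan_then_execute("Solve the puzzle:"))
```

Running the sketch prints two toy completions; the only point is that the second one conditions its answer on an explicit plan rather than generating tokens straight from the prompt.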
This new step forward in AI technology opens up more possibilities for adults working in schools. AI applications that model and reflect on their “thought process” could not only supercharge the capacity of staff working outside of classrooms but also give educators a novel resource for reflecting on and refining their own instruction. But as Dan Meyer notes, LLMs still face a major obstacle in supporting students and capturing their attention on a sustained basis.
The potential of ever-evolving LLMs is exciting, but let’s not forget about addressing the practical needs of the humans in the loop.
Education Evolution: AI and Learning
A new school year is underway, and attention is returning to the use of AI in classrooms. We’re starting to see substantive research about where AI is truly helpful and, equally important, where it falls short. In the last issue, we highlighted a study that showed how educational guardrails built into ChatGPT-4 could mitigate potential harm to students’ learning. Building on that idea, a Stanford University coding class conducted a randomized controlled trial that found surprising exam gains for students who used the course’s GPT-4 chatbot, even as the chatbot negatively affected engagement. Researchers at the University of California, Berkeley investigated ways to combat chatbots’ math hallucinations and applied their approach to math tutoring, with promising results.
We’re also seeing more work highlighting how AI can help educators. One case study evaluated AI’s effect on lesson planning and found that teachers who used AI as a “thought partner” in addition to a “content generator” reported productivity gains. These results paint a rosy picture of classroom-based AI integration; however, as South Korea is finding out, parental skepticism could derail the rollout of AI tools before they even make their way into classrooms.
🛍️ Beating Buyer’s Remorse
- Navigating the investment and vendor landscape of ed tech is getting more complicated as districts and schools face the looming fiscal cliff.
- Guidelines, from formal procurement benchmarks to warnings against “snake oil” offerings, are meant to help district and state administrators avoid falling for the hype and being left to deal with the consequences.
- Vetting vendors carefully — including looking at their funding sources — is critical as platforms incorporate AI for uses beyond learning, such as school safety.
- Cybersecurity is a key consideration, especially in light of ransomware attacks on student data. Knowing how ed tech products handle and protect student data can prevent situations like the one Los Angeles Unified School District is facing after its AI chatbot vendor collapsed.
♿ Unlocking AI’s Potential to Serve Students with Disabilities
- Advocates for students with disabilities are particularly excited about AI’s potential to “revolutionize” special education interventions, especially for students with intellectual and developmental disabilities.
- But there is also concern that students with disabilities are being left out of conversations about AI.
- To help, New America and the Educating All Learners Alliance published this guide for policymakers as they consider the impacts of AI regulations on students with disabilities.
🏛️ Higher Education Hype
- Support for AI upskilling and research is ramping up across higher education, from community colleges to state universities to the Ivy League.
- Yet, a survey of 63 university leaders found that significant investment is still needed to match the growing enthusiasm.
- In California, Gov. Gavin Newsom is spearheading a partnership with NVIDIA to bring AI laboratories, higher education curricula, professional development training, and industry certifications to the state’s public higher education and workforce systems.
The Latest: AI Sector Updates
The AI sector moves so fast that it’s not often we pause to reflect and look back. Claire Zau’s recent newsletter, Observations from the AI Model Olympics, offers us a chance to do so by summarizing the myriad developments released during summer 2024 alone. Professor Ethan Mollick also reflects on AI’s progress in his piece, Gradually, then Suddenly: Upon the Threshold, reminding us that while new AI models are often flashy and exciting, the “thresholds of actual use” are often invisible — even as they mark powerful milestones of technological progress. Finally, RAND offers some lessons to learn and heed in a report analyzing common root causes of failure for AI projects.
🏆 The Latest and Greatest
- Apple’s September 2024 event included a focus on the integration of its AI, Apple Intelligence, into new versions of its operating systems — features that will find their way into schools as students and educators update their iPhones and iPads.
- Independent of OpenAI, Microsoft released its Phi-3.5 models, which surpass Google’s Gemini 1.5 Flash, Meta’s Llama 3.1, and OpenAI’s GPT-4o on certain benchmarks.
- Gemini, nonetheless, received updates with improved image generation, voice mode, and custom chatbots called “Gems” that can act as “a team of experts.”
- X (formerly known as Twitter) released a chatbot named “Grok” that was originally designed to be a search assistant but has been spreading misinformation. Its successors, Grok-2 and Grok-2 mini, are also being used to generate fake images with little oversight.
© When Imitation is NOT Flattery
- AI companies’ use of copyrighted materials in training data is garnering more scrutiny as a class action lawsuit brought by 10 creatives against tech startups is set to move forward.
- Yet leaked documents from NVIDIA and comments from Google’s former CEO suggest that lawsuits won’t stop tech companies from scraping content anyway; in the end, legal recourse might come too late to mitigate harms.
- Many are trying to take preventative measures: YouTube is developing AI detection tools based on its existing copyright detection technology, and major publishers are opting out of web scraping wherever possible.
- Corporate partnerships, such as the one between OpenAI and Condé Nast, are one way for major news media outlets to regain control of copyrighted content, but smaller or homegrown websites and their authors might be left behind.
🦺 Safety First
- A group of 20+ prominent AI, legal, and tech industry thought leaders proposed the idea of Personhood Credentials (PHCs), which would help distinguish humans from AI bots while still protecting individuals’ privacy.
- Black in AI and Stanford University’s Institute for Human-Centered Artificial Intelligence released a report for the Congressional Black Caucus exploring the effect of AI on Black Americans.
- A University of Chicago survey of AI use by workers in Denmark highlights a gender gap, with women 20 percentage points less likely to use ChatGPT than men in the same occupation.
Pioneering Policy: AI Governance, Regulation, Guidance, and More
Back in Issue #2, we highlighted Jeremy Howard’s piece about California’s Senate Bill (SB) 1047 as an example of the challenges policymakers face in regulating AI. Just two months later, that bill has passed both houses of the California State Legislature and is with Gov. Newsom, who must act on it before the end of September 2024. The policy doesn’t look the same as it did earlier this summer: it was amended in response to criticism from AI companies and lawmakers alike, including Rep. Nancy Pelosi. The bill aims to prevent “catastrophic harm” caused by large AI models, but even with the amendments, critics worry that it could stifle innovation and research by making developers liable for harms perpetrated by users. Whether or not Gov. Newsom signs SB 1047 into law, the core question of who should be responsible for potential harms resulting from generative AI is likely to surface again.
🇪🇺 Across the Pond: Compare and Contrast
- On Aug. 1, 2024, the European Union (EU)’s AI Act went into effect. The law regulates different use cases of AI, most of which are classified as low or no risk. High-risk uses defined in the law include AI in education and employment; developers of those systems must now comply with quality assurance obligations, and public authorities using high-risk systems will also need to register them in an EU database.
- The Council of Europe’s Framework Convention on Artificial Intelligence touts itself as the “first-ever international legally-binding treaty” for AI, meaning that, in theory, signatories would face consequences for violating its requirements. The U.S. is a signatory, but the Convention will not take effect until five signatories ratify it.
🔬 The Need for Speed in R&D
- Over the summer, two former state chiefs of education published an urgent call for more R&D in K-12 education.
- The next day, U.S. Senators Michael Bennet and John Cornyn introduced the bipartisan New Essential Education Discoveries (NEED) Act, which would create “a national center that advances high-risk, high-reward education research projects.”
- These moves come on the heels of the Biden administration’s announcement of massive joint federal and philanthropic investments into public interest technology, including $32 million into the National Science Foundation’s Experiential Learning in Emerging and Novel Technologies program.
💰 Silicon Valley Carries a Big Stick
- As the 2024 election draws closer, AI companies are increasing their lobbying efforts, drawing skepticism about their contributions to future tech regulations.
- Lobbying efforts aren’t just aimed at Washington, D.C., either: A tech firm offering AI-powered gun-detection software seems to have gained business from school districts by courting state legislators.
- Tech leaders’ outsized influence ranges from deep pockets to huge user bases, which can be used to “bully” regulators and harm public interest, argues Marietje Schaake, the international policy director at Stanford University’s Cyber Policy Center.