July 11, 2024

The Leading Indicator: Issue Two

By Alex Spurrier and Marisa Mission


You might have heard this story before: The Los Angeles Unified School District (LAUSD) was an early adopter of a shiny new technology only to see the luster wear off quickly after significant vendor issues. A decade ago, it was the iPad fiasco. Today, it’s “Ed,” LAUSD’s artificial intelligence (AI) chatbot for students and families. Instead of living up to claims the bot would “transform education” districtwide, the company behind Ed, AllHere, announced major staff furloughs three months after Ed’s launch. On top of that, a whistleblower is warning that AllHere may have violated LAUSD’s data privacy policies. The 74 has great coverage on this story.

There are important differences between LAUSD’s iPad and AI experiences: The former involved partnering with two of the biggest names in tech and education (Apple and Pearson), while the latter involved a relatively small firm (AllHere). The iPad rollout was aimed at transforming the classroom experience, while the AI chatbot provides support beyond the school walls. Yet in both cases, LAUSD tried a big, fast move –– an approach that can work well when problems and solutions are well understood or acute time constraints demand action. Neither is true when it comes to integrating new technologies into K-12 systems.

As was the case with one-to-one device programs in the aughts, education leaders must look beyond the hype surrounding new technology –– especially when the pace of development is fast, as is the case with AI. Instead of going big and fast as LAUSD has (twice), leaders interested in leveraging AI technology should start small. It doesn’t have to be a high-risk endeavor: Use pilot programs to address discrete problems with the support of AI tools, iterate based on feedback, and scale approaches that work. Doing so can help ensure that district leaders won’t get fooled again.

 

Education Evolution: AI and Learning

AI chatbots –– whether in the form of “co-pilots,” tutors, etc. –– can store and reference relevant information, allowing tailored interactions for a specific user. So, it’s no surprise that many see AI as the key to personalized learning for students and families. Beyond custom tutoring, AI might assist families with educational decision-making –– from suggesting fun educational activities to finding extracurricular programs to choosing a local school. Is that really possible, though? The latest installments of Bellwether’s Charting a Course series unpack what it would take to build an AI model that functions as a “navigator” or “navigator’s assistant” for families on their educational journey. Spoiler alert: it takes a lot of resources, and the jury’s out on whether an AI model can truly replace the relationship-building prowess and multi-modal interpretation skills of humans.
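To make that pattern concrete, here is a minimal sketch of a bot that stores a few per-family facts and folds them into each prompt so responses are tailored to that user. The class, method names, and example facts are all hypothetical illustrations (not any shipping product), and the model call itself is omitted:

```python
# Hypothetical illustration of chatbot personalization: store facts about a
# user, then inject them into every prompt sent to the model.

class FamilyAssistant:
    """Keeps a small per-family memory and builds tailored prompts."""

    def __init__(self) -> None:
        self.memory: dict[str, str] = {}

    def remember(self, key: str, value: str) -> None:
        # Save a fact the bot can reference later, e.g., a child's grade level.
        self.memory[key] = value

    def build_prompt(self, question: str) -> str:
        # Prepend stored context so the model's answer fits this family.
        context = "\n".join(f"- {k}: {v}" for k, v in self.memory.items())
        return (
            "You are an education assistant. Known family context:\n"
            f"{context}\n\nFamily's question: {question}"
        )

assistant = FamilyAssistant()
assistant.remember("grade level", "3rd grade")
assistant.remember("interest", "robotics")
print(assistant.build_prompt("What weekend activities would suit my child?"))
```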

1️⃣ Personalized Learning and Its Prerequisites

  • Home-schooling families are embracing AI copilots’ ability to brainstorm learning activities, manage portfolios, and help students explore complex topics independently.
  • A new AI-powered School Comparison Tool references and interprets public data on school progress measures to find a school that meets a specific family’s needs (a simplified scoring sketch follows this list).
  • The Institute of Education Sciences’ (IES) former Director Mark Schneider points out that truly personalized AI requires high-quality educational training data –– and IES is sitting on a goldmine of National Center for Education Statistics and National Assessment of Educational Progress data hidden by “archaic” barriers and systems. Data systems, he argues, must be modernized before high-quality, personalized learning from new technology can be delivered.
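As a rough sketch of what a comparison tool like the one above might do, the example below ranks schools by a weighted score over public progress measures, with weights reflecting one family’s priorities. The school names, measures, and weighting scheme are illustrative assumptions, not the actual tool’s data or method:

```python
# Hypothetical illustration: rank schools by a weighted score over public
# progress measures, using weights that reflect one family's priorities.

schools = [
    {"name": "Maple Elementary", "reading_growth": 0.72, "math_growth": 0.65},
    {"name": "Oak Elementary", "reading_growth": 0.58, "math_growth": 0.81},
]

# This family weights reading growth more heavily than math growth.
weights = {"reading_growth": 0.7, "math_growth": 0.3}

def score(school: dict) -> float:
    """Weighted sum of the progress measures this family cares about."""
    return sum(weights[m] * school[m] for m in weights)

# Print schools from best fit to worst fit for this family.
for s in sorted(schools, key=score, reverse=True):
    print(f"{s['name']}: {score(s):.2f}")
```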

🧰 Newest Tools for Educators

🕸 With Great Power Comes Great Responsibility

  • Google researchers teamed up with students and educators to develop benchmarks and datasets that can improve the pedagogical skills of Large Language Models (LLMs). Their resulting framework could be used to evaluate AI models specifically for educational and pedagogical purposes.
  • The nation’s two largest teachers unions published their respective perspectives on AI in schools:
    • American Federation of Teachers’ members developed a set of “Commonsense Guardrails” to guide educators incorporating AI in their classrooms. The resource is available alongside the AI Educator Brain, a free training series designed to help teachers better understand AI in the classroom.
    • National Education Association members also approved a policy statement regarding AI in education, which accompanied a report with implementation recommendations.

The Latest: AI Sector Updates

AI models keep leapfrogging one another. Just one month after OpenAI released GPT-4o, Anthropic dropped Claude 3.5 Sonnet, which competes with and arguably outperforms GPT-4o on multiple benchmarks. Anthropic’s focus on safety may even give Claude a slight edge over ChatGPT, especially in the education sector, where student safety and privacy should be the highest priority.

🤝🏼 The Road to Creating J.A.R.V.I.S.

  • A recent study from Stanford University’s Institute for Human-Centered AI (HAI) found that doctors preferred medical summaries written by trained AI models over summaries written by humans. This is promising given that “summarizing medical records is difficult, highly consequential, and detail-oriented work” –– sound familiar?
  • Hallucinations are a major concern for AI models, but Fortune’s Eye on AI posits that “many professionals will come to find copilots useful and helpful, even in cases when they aren’t entirely accurate … It’s not about whether the copilot can do well enough on its own to replace human workers. It’s about whether the human working with the copilot can perform better than they could on their own.”
  • A young adult on TikTok programmed his own version of Tony Stark’s J.A.R.V.I.S. and is publishing the blueprints and code for others to use. 

🟰 Looking Beyond Efficiency to…Equality?

  • University of Pennsylvania Wharton School professor Ethan Mollick cautions against chasing efficiency purely for cost savings’ sake, instead highlighting the value of AI’s “latent expertise.” He also points out that the best way to unlock that potential is through shared learning from professionals’ everyday usage and trial and error.
  • Anthropic is hoping to facilitate that daily usage, too: Claude’s new Projects feature lets teams give Claude a project-specific internal knowledge base to reference (a generic sketch of this grounding pattern follows this list).
  • In an optimistic piece earlier this year, a Massachusetts Institute of Technology professor of economics posited that AI has the potential to be an “inversion technology” that reverses income inequality by enabling greater access to “expert” skills and careers. 
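Here is a minimal sketch of the knowledge-base grounding pattern mentioned above: retrieve the most relevant internal document for a question and prepend it to the prompt. This is a generic pattern, not Anthropic’s implementation; real systems typically use embeddings and vector search rather than the simple keyword overlap used here, and the documents are made up for illustration:

```python
# Hypothetical illustration of grounding a model in a project knowledge base:
# retrieve the most relevant internal document, then prepend it to the prompt.

PROJECT_DOCS = {
    "attendance_policy.md": "Students with three unexcused absences trigger outreach.",
    "tutoring_schedule.md": "Math tutoring runs Tuesdays and Thursdays after school.",
}

def retrieve(question: str) -> str:
    """Pick the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    best = max(
        PROJECT_DOCS,
        key=lambda name: len(q_words & set(PROJECT_DOCS[name].lower().split())),
    )
    return PROJECT_DOCS[best]

def grounded_prompt(question: str) -> str:
    # The model now answers from project context instead of from memory alone.
    return (
        f"Answer using this project context:\n{retrieve(question)}\n\n"
        f"Question: {question}"
    )

print(grounded_prompt("When is math tutoring offered?"))
```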

🤑 Funding Flows

Pioneering Policy: AI Governance, Regulation, Guidance, and More

Just after GPT-4o’s release, two former OpenAI board members published a piece warning against AI companies’ self-governance, using OpenAI as an example. Two days later, OpenAI announced a new safety team…led by CEO Sam Altman. Given recent resignations by former OpenAI leaders citing safety concerns, some are skeptical about the new team. Two days after that, OpenAI’s board disputed the former board members’ claims and emphasized its commitment to safety, regulation, and oversight. Topping it all off, 13 former employees of OpenAI, Google DeepMind, and Anthropic published an open letter warning that AI companies are stifling criticism and calling for stronger whistleblower protections. This saga, which unfolded over just two weeks, prompts us to wonder (yet again): In the absence of government regulation, who is responsible for oversight, and who is held accountable if (when?) AI systems cause harm?

📜 New Slew of Proposed Regulations

  • California’s State Legislature committees are considering more than 30 new bills aiming to protect consumer safety and intellectual property as AI technology proliferates. Some of the bills, however, seem “vague,” “impractical,” or downright “contradictory.” 
  • Meanwhile, in the U.S. Senate, a bipartisan bill, the “AI Education Act of 2024,” was introduced in May to authorize and direct the National Science Foundation to develop guidance, scholarships, professional development, and other programming on AI education.

📚 Policymakers’ Must-Reads

You can read the first issue of the Leading Indicator here.
