November 4, 2024

The Leading Indicator: Issue Five

By Alex Spurrier and Marisa Mission


Join us on Nov. 14 at 2 p.m. Eastern for our webinar AI in Education: Key Takeaways From Bellwether’s Learning Systems

It’s the peak of election season! If you need a break from refreshing poll aggregators, forecasting models, and prediction markets, grab your beverage of choice and cozy up with the latest news on artificial intelligence (AI) and education! 

This fall, academia has seen some exciting developments powered by AI… and some troubling ones. 

Nobel Prizes were awarded for advanced AI systems (Physics) and AI-fueled scientific breakthroughs (Chemistry). Yet while AI’s potential to accelerate progress on these fronts is encouraging, not all AI-related academic news has been positive. Matthew Kraft, an economist and education researcher at Brown University, discovered a working paper whose references listed him as first author of a paper he never wrote — likely the result of a large language model (LLM) hallucination.

Ethical dilemmas regarding AI’s use are already playing out in K-12. A student in Massachusetts was punished for using an AI tool to, as the student claims, research and outline an Advanced Placement U.S. History essay. That student’s parents are suing the school district, arguing their child didn’t break any existing school rules because the school’s AI policy went into effect the year after the incident. However, the student’s teacher found “many large cut and paste items” in the student’s first draft after anti-plagiarism software initially flagged the essay.

While this case may end up being more about overly litigious parents than AI ethics, it does highlight some of the challenges with using AI. Anti-plagiarism software like Turnitin — the tool this school district uses — relies on AI to determine potential use of AI in student work. However, AI detection software frequently produces “false positives,” inaccurately identifying students as cheaters, a phenomenon that one study found affects Black students at twice the rate of their white or Latino peers.

It’s not unreasonable to ask: What’s the “acceptable” rate of false positives in cheating detection software? How can companies and educators minimize potential bias inherent in this software? What policies or processes can schools adopt to reduce the need for plagiarism detection software in the first place?

There’s no clear-cut path forward for navigating thoughtful and ethical use of AI in K-12 schools. Embracing robust honor codes might be a solution in settings where ethics are already ingrained in a school’s culture, but that approach likely isn’t scalable. A new Center on Reinventing Public Education (CRPE) report indicates that teacher preparation programs are slow to adapt to the possibilities and challenges AI poses in K-12 schools. Policies and institutions that evolved in the eras of blue books, SparkNotes, and search engines aren’t ready to meet the needs of the dawning epoch of AI.

A new trail must be blazed. Education leaders and policymakers should start with AI literacy to understand how these new tools actually work as well as their limitations and then build policies and other infrastructure to address ethical and practical considerations. Our colleagues’ Learning Systems work offers a set of waypoints to follow on this front. Absent work on these new challenges, students, educators, and schools will continue to encounter AI-related ethical issues without the guidance and support they deserve.

Education Evolution: AI and Learning

LLMs are revolutionary for their ability to write almost like a human, so it’s no surprise that many English and writing teachers are worried and pessimistic about their effects on students’ writing abilities. Pieces with titles like “I Quit Teaching Because of ChatGPT” and “ChatGPT Can Make English Teachers Feel Doomed” paint a bleak picture of student writing, even for educators who are doing their best to adapt.

Yet AI writing still “isn’t ready” to take over for humans. Instead, many professionals are using AI as a sounding board. The New Yorker offered a thoughtful deep dive into this process, noting that ChatGPT didn’t make the writer any more efficient but altered the “psychological experience of writing.” In practice, ChatGPT was more of “a brain hack to make the act of writing feel less difficult.” For many professionals (including yours truly), this resonates: things written solely by AI are obviously bad, whether due to word choice, tone, hallucinations, etc., but the tool can still be helpful. Educators, however, may face an uphill battle helping students understand where the true value of AI lies when it comes to learning how to write.

💬 Language Processing Meets Language Teaching

🖲️ Preventing Prejudiced Programming

  • EdSurge points out that just 1 in 10 tech workers is Latino, despite Latinos being poised to make up 78% of new workers by 2030. The existing disparity, combined with uneven access to K-12 STEM coursework, has advocates fearing that Latino students could be “left behind in AI and STEM jobs.”
  • Research from Stanford University’s Human-Centered Artificial Intelligence Center is finding that while LLMs are becoming less “overtly racist,” they are instead becoming more “covertly racist.” These implications are backed up by two education-focused studies that consider how biases show up when generating instructional materials — the authors found indications of erasure, subordination, and harmful stereotypes.
  • Digital Promise, which recently received an influx of money to lead AI education research (more below), published an Equity Framework to guide K-12, state, and higher education systems toward more inclusive innovations.

👪 Digital Natives, Analog Parents

The Latest: AI Sector Updates

Tech companies have invested boatloads of money into educational AI. Google has put $120 million into a fund for global AI education, while Salesforce grants worth $23 million will go to schools across the U.S. to help equip students with the skills they need to use AI effectively. Ed tech companies are also benefiting on a global scale: startups Brisk and Prof. Jim raised a collective $8.4 million in venture capital rounds, and Roombr is partnering with Intel and Texas Instruments to reach 1 million classrooms in India. Even the federal government is getting in on the party: the Institute of Education Sciences (IES) is creating four new research centers focused on generative AI for K-12. Digital Promise got almost $10 million from IES to lead one of those centers, which will focus on using AI to improve elementary reading instruction for English learners.

These investments are in line with a broader trend identified by Goldman Sachs: U.S. tech companies are spending about $200 billion on AI, roughly equivalent to a quarter of the entire European continent’s capital expenditures. But the money may be flowing inequitably: While California universities are seeing new coalitions and resources to expand digital infrastructure, historically Black colleges and universities need more “bold investments” to both counter historical underfunding and keep pace with digital evolution.

🤖 A Workforce Without Workers

📣 A PR Campaign for AI?

✒️ Copyright Conundrum, Continued

Pioneering Policy: AI Governance, Regulation, Guidance, and More

Last month, we highlighted the EU’s recent AI Act, which went into effect on Aug. 1 and requires AI developers to complete quality assurance obligations for “high-risk” applications (e.g., in education and workforce contexts). Although the enforcement mechanisms and benchmarks for the regulation are still being developed, early testing found that the top AI models are scoring low on safety categories such as cybersecurity and nondiscriminatory output. A public leaderboard links to full reports on each model tested, with both a total score as well as individual scores aligned to each of the AI Act’s Ethical Principles. While the framework and testing were developed privately, these results offer an example of how regulations can be translated into technical requirements that make it easier for lawmakers to ensure public safety.

🐻 A New Operating System: California’s AI Rulebook

🤿 Deep(er)fakes

🏛️ From (and for) the Feds

The Learning Agency’s CEO published a call for an “education innovation agenda” from the next presidential administration, offering suggestions for what to prioritize no matter who wins the election.
