Join us on Nov. 14 at 2 p.m. Eastern for our webinar AI in Education: Key Takeaways From Bellwether’s Learning Systems.
It’s the peak of election season! If you need a break from refreshing poll aggregators, forecasting models, and prediction markets, grab your beverage of choice and cozy up with the latest news on artificial intelligence (AI) and education!
This fall, academia has seen some exciting developments powered by AI… and some troubling ones.
Nobel Prizes were awarded for advanced AI systems (Physics) and AI-fueled scientific breakthroughs (Chemistry). Yet while the potential for AI to accelerate progress on these fronts is encouraging, not all AI-related academic news has been positive. Matthew Kraft, an economist and education researcher at Brown University, found a working paper whose reference list credited him as the first author of a paper he never wrote, likely the result of a large language model (LLM) hallucination.
Ethical dilemmas regarding AI’s use are already playing out in K-12. A student in Massachusetts was punished for using an AI tool to, as the student claims, research and outline an Advanced Placement U.S. History essay. That student’s parents are suing the school district, arguing that their child didn’t break any existing school rules because the school’s AI policy went into effect the year after the incident. However, the student’s teacher found “many large cut and paste items” in the student’s first draft after anti-plagiarism software initially flagged the essay.
While this case may end up being more about overly litigious parents than AI ethics, it does highlight some of the challenges of using AI. Anti-plagiarism software like Turnitin, the tool this school district uses, relies on AI to detect potential use of AI in student work. However, AI detection software frequently produces “false positives,” inaccurately identifying students as cheaters, a phenomenon that one study found affects Black students at twice the rate of their white or Latino peers.
It’s not unreasonable to ask: What’s the “acceptable” rate of false positives in cheating detection software? How can companies and educators minimize potential bias inherent in this software? What policies or processes can schools adopt to reduce the need for plagiarism detection software in the first place?
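To make the scale of that question concrete, here is a minimal back-of-the-envelope sketch (our illustration, not drawn from any study cited above; every number in it is a hypothetical assumption) of how even a low false-positive rate translates into wrongly flagged students across a large district:

```python
# Hypothetical back-of-the-envelope math: how many honest students would an
# AI-detection tool flag at a given false-positive rate?
# All numbers below are illustrative assumptions, not figures from any study.

def flagged_honest_students(num_essays: int, cheating_rate: float,
                            false_positive_rate: float) -> float:
    """Expected number of essays with no prohibited AI use that still get flagged."""
    honest_essays = num_essays * (1 - cheating_rate)
    return honest_essays * false_positive_rate

# Example: a district grading 10,000 essays, assuming 5% actually involve
# prohibited AI use and the detector has a 2% false-positive rate.
wrongly_flagged = flagged_honest_students(10_000, 0.05, 0.02)
print(f"Expected honest essays flagged: {wrongly_flagged:.0f}")  # ~190
```

Under those assumed rates, roughly 190 honest essays out of 10,000 would be flagged; and if the false-positive rate runs twice as high for one group of students, as the study above suggests it does for Black students, those wrongful flags concentrate on that group.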
There’s no clear-cut path forward for navigating thoughtful and ethical use of AI in K-12 schools. Robust honor codes might be a solution in settings where ethics are already ingrained in a school’s culture, but that approach likely isn’t scalable. A new Center on Reinventing Public Education (CRPE) report indicates that teacher preparation programs are slow to adapt to the possibilities and challenges AI poses in K-12 schools. Policies and institutions that evolved in the eras of blue books, SparkNotes, and search engines aren’t ready to meet the needs of the dawning epoch of AI.
A new trail must be blazed. Education leaders and policymakers should start with AI literacy to understand how these new tools actually work as well as their limitations and then build policies and other infrastructure to address ethical and practical considerations. Our colleagues’ Learning Systems work offers a set of waypoints to follow on this front. Absent work on these new challenges, students, educators, and schools will continue to encounter AI-related ethical issues without the guidance and support they deserve.
Education Evolution: AI and Learning
LLMs are revolutionary for their ability to write almost like a human, so it’s no surprise that many English and writing teachers are worried and pessimistic about the technology’s effects on students’ writing abilities. Pieces with titles like “I Quit Teaching Because of ChatGPT” and “ChatGPT Can Make English Teachers Feel Doomed” paint a bleak picture of student writing, even for educators who are doing their best to adapt.
Yet AI writing still “isn’t ready” to take over for humans. Instead, many professionals are using AI as a sounding board. The New Yorker offered a thoughtful deep dive into this process, noting that ChatGPT didn’t make the writer any more efficient but did alter the “psychological experience of writing.” In practice, ChatGPT was more of “a brain hack to make the act of writing feel less difficult.” For many professionals (including yours truly), this resonates: prose written solely by AI is obviously bad, whether due to word choice, tone, or hallucinations, but the tool can still be helpful. Educators, however, may face an uphill battle helping students understand where the true value of AI lies when it comes to learning how to write.
💬 Language Processing Meets Language Teaching
- AI could be a game-changer for bilingual education, with a new mixed-methods study finding that AI-mediated language instruction led to “significantly higher” positive effects on students learning English.
- Backing that up is Duolingo’s founder Luis von Ahn, who is “all in on AI.” The popular language education platform recently unveiled several new AI-powered features, including video calls with an AI character and a real-world simulator called Adventures.
🖲️ Preventing Prejudiced Programming
- EdSurge points out that just 1 in 10 tech workers is Latino, despite Latinos being poised to make up 78% of new workers by 2030. That existing disparity, combined with uneven access to K-12 STEM coursework, has advocates fearing that Latino students could be “left behind in AI and STEM jobs.”
- Research from the Stanford Institute for Human-Centered Artificial Intelligence finds that while LLMs are becoming less “overtly racist,” they are instead becoming more “covertly racist.” Those findings are echoed by two education-focused studies examining how biases show up in AI-generated instructional materials; the authors found indications of erasure, subordination, and harmful stereotypes.
- Digital Promise, which recently received an influx of money to lead AI education research (more below), published an Equity Framework to guide K-12, state, and higher education systems toward more inclusive innovations.
👪 Digital Natives, Analog Parents
- A recent survey from Common Sense Media found that 7 in 10 American teenagers had used generative AI tools, whereas only 37% of parents knew that their children were using the technology.
- The report also found that almost half (49%) of parents had not had conversations about generative AI with their children. However, teens who have had class discussions “are more likely to have nuanced views about its usefulness and challenges.” These findings underscore the value of and need for AI literacy education.
- These results build on an earlier survey that found students were not just using generative AI, but had formed a set of common perspectives and opinions on the tools.
The Latest: AI Sector Updates
Tech companies have invested boatloads of money into educational AI. Google’s thrown $120 million into a fund for global AI education, while Salesforce grants worth $23 million will go to schools across the U.S. to help equip students with the skills they need to use AI effectively. Ed tech companies are also benefiting on a global scale: startups Brisk and Prof. Jim raised a collective $8.4 million in venture capital rounds, and Roombr is partnering with Intel and Texas Instruments to reach 1 million classrooms in India. Even the federal government is getting in on the action: the Institute of Education Sciences (IES) is creating four new research centers focused on generative AI for K-12. Digital Promise got almost $10 million from IES to lead one of those centers, which will focus on using AI to improve elementary reading instruction for English learners.
These investments are in line with a broader trend identified by Goldman Sachs: U.S. tech companies are spending about $200 billion on AI, roughly equivalent to a quarter of the entire European continent’s capital expenditures. But the money may be flowing inequitably: while California universities are seeing new coalitions and resources to expand digital infrastructure, historically Black colleges and universities need more “bold investments” both to counter historical underfunding and to keep pace with digital evolution.
🤖 A Workforce Without Workers
- Salesforce’s new AI strategy doubles down on fully autonomous agents at the expense of human assistants. Salesforce is hedging against job losses in client organizations, which would jeopardize its current subscription-based model.
- Anthropic took a step toward that vision by releasing computer use for Claude, an autonomous capability that “represents a huge shift in AI use” since the model no longer requires continuous prompting.
- Typically, the idea of AI replacing humans doesn’t bring to mind the C-suite, but a new experiment from Harvard Business School found that in simulations of real-world CEO decision-making, AI models outperformed humans on “nearly every metric.” The one difference? “Black swan events,” or crises that require adaptability, human intuition, and foresight.
- Nevada and Google are partnering to launch a generative AI system that will help caseworkers decide appeals of unemployment claims. The state is hoping that generative AI can help work through a backlog of claims, but many are worried that the “emphasis on speed could undermine any human guardrails Nevada puts in place.”
📣 A PR Campaign for AI?
- Recently, top AI company executives have each published think pieces outlining their visions for the future of AI. Sam Altman (CEO of OpenAI) paints a very optimistic picture of “The Intelligence Age,” while Dario Amodei (CEO of Anthropic) does the same, but with far more specific detail about what he expects to see.
- In education, Sal Khan’s new book “Brave New Words” hops on the optimism train, but as Robin Lake from CRPE points out, it offers little evidence for its assertions.
- And a former Spotify executive writes that the “Uncanny Valley” might not hinder AI adoption, with the imperfections being “a feature, not a bug.”
✒️ Copyright Conundrum, Continued
- In the previous issue of Leading Indicator, we highlighted the tension between AI models’ training data and creatives’ copyrights to their own work. The saga continues, with OpenAI giving lawyers access to its proprietary training data for the first time ever as part of a set of lawsuits from top authors. Meanwhile, a former OpenAI employee publicly asserted that the company is violating copyright law.
- To avoid future copyright controversies, OpenAI is also expanding its licensing agreements with publishers. Last month, we highlighted its partnership with Condé Nast, but since then, OpenAI has announced an agreement with Hearst that allows products like ChatGPT to display content from “outlets like the Houston Chronicle, the San Francisco Chronicle, Esquire, Cosmopolitan, Elle and others.” Meta has entered a similar deal with Reuters.
- And a new player has entered the chat: TollBit offers a marketplace where publishers can monetize the data AI companies scrape for training. The platform, which recently raised $24 million, offers an alternative to the licensing agreements OpenAI and Meta are pursuing.
Pioneering Policy: AI Governance, Regulation, Guidance, and More
Last month, we highlighted the EU’s recent AI Act, which went into effect on Aug. 1 and requires AI developers to complete quality assurance obligations for “high-risk” applications (e.g., in education and workforce contexts). Although the enforcement mechanisms and benchmarks for the regulation are still being developed, early testing found that top AI models score low on safety categories such as cybersecurity and nondiscriminatory output. A public leaderboard links to full reports on each model tested, with a total score as well as individual scores aligned to each of the AI Act’s Ethical Principles. While the framework and testing were developed privately, these results offer an example of how regulations can be translated into technical requirements that make it easier for lawmakers to ensure public safety.
🐻 A New Operating System: California’s AI Rulebook
- Gov. Newsom vetoed the controversial Senate Bill 1047 (see last month’s issue for our coverage) at the eleventh hour last month, arguing that the bill “applies stringent standards to even the most basic functions — so long as a large system deploys it.”
- At the same time, Gov. Newsom signed into law 18 other AI-related bills regulating issues such as deepfakes, watermark disclosures, AI actor clones, privacy, and education.
- One of those bills is already running into trouble: Assembly Bill 2839 empowered California judges to order those who distribute deepfakes to take them down. A federal judge, however, recently blocked the law from going into effect, finding that it is too broad as written.
🤿 Deep(er)fakes
- Half of U.S. states are seeking to regulate the use of AI to influence elections; many of these regulations specifically target the use of deepfakes.
- Elections aren’t the only thing under threat, however: a growing number of schools are confronting students’ use of deepfakes to create explicit imagery of their peers.
- It’s become enough of an issue that the White House is highlighting voluntary commitments from AI companies that take steps to “combat image-based sexual abuse.”
- Several companies’ commitments include making it easier to report and remove non-consensual intimate imagery, which could help districts and schools provide greater support to victims.
🏛️ From (and for) the Feds
- The U.S. Department of Labor released a new report outlining best practices for the use of AI by employers. The report emphasizes centering workers’ rights and safety, while “embracing possible innovation and production.”
- The National Association of State Boards of Education published a guide to connecting state policies to the National Educational Technology Plan.
- The Learning Agency’s CEO published a call for an “education innovation agenda” from the next presidential administration, and offered suggestions for what to prioritize, no matter who wins the election.