This spring, you’ll hear plenty about “vibecoding” — the latest tech buzzword describing the creation of fully functional apps via artificial intelligence (AI) with little more than high-level descriptive input. Large Language Model (LLM) tools like GitHub Copilot are helping programmers do more of their work faster and in smaller teams, but vibecoding takes it a step further by allowing individual entrepreneurs to build out their ideas with greater speed. Vibecoding is already reshaping the startup landscape: nearly one-quarter of startups in the current Y Combinator cohort have codebases that are 95% AI-generated.
It’s not hard to imagine educators trying this approach to develop tools and materials for their classrooms — a new form of “vibe-agogy” where instructional design starts with broad goals and is then fleshed out with AI-generated material to cover curriculum and standards. A teacher wouldn’t need to know Python or JavaScript to create an app for students, just the ability to articulate their pedagogical “vibes.”
While this could open up a new creative wave of educational tool development, the sector should proceed with caution. We’ve seen this movie before: a little over a decade ago, content-sharing platforms like Pinterest and Teachers Pay Teachers democratized the sharing and curation of educator-generated materials. While these tools are still wildly popular, researchers found that most of the content available on those websites is “mediocre” and “probably not worth using.” As Pinterest already suffers from a flood of spammy AI-generated recipes and DIY ideas, educational resources could face a similar fate: quantity even further overwhelming quality.
Vibe-agogy presents opportunities but also very clear risks for the education sector. How can educators harness the creative potential offered by new AI tools while maintaining educational integrity? First, the application of these tools should supplement, not supplant, a high-quality core curriculum. Additionally, educators should ensure that vibe-agogy doesn’t stray from what cognitive science shows us about how children learn most effectively.
The true test of vibe-agogy won’t be how many new, aesthetically-pleasing products it produces, but whether those tools genuinely enhance student learning outcomes in measurable, meaningful ways.
Education Evolution: AI and Learning
An EdWeek series, “How AI is Reshaping Teachers’ Jobs,” explores results from a survey of 990 educators across the country and details how AI is being deployed in schools. Some of the most common uses include grading, tutoring, coaching, and combating burnout, among others. Despite the rising adoption rates the series documents, an early look at other survey data from the Center on Reinventing Public Education shows — unsurprisingly — that even early adopters of AI in education aren’t pursuing systemic AI adoption, as they grapple with overwhelm, limited resources, and uncertain legal and policy guidance. This reflects what we’re seeing across the K-12 landscape: incredibly uneven and/or lagging adoption as AI models and tools continue to evolve and gain traction, including among students.
In other news:
- The tech industry isn’t representative of the general population, which likely shapes whether technology is designed to be inclusive, as Dr. Allison Scott highlights in a recent podcast.
- The Disagreement hosted a debate on the role of AI in K-12 education between Benjamin Riley of Cognitive Resonance and Alex Kotran of The AI Education Project.
- More than 4,500 middle and high schools offer counseling services to students via an AI-powered chatbot, Sonny.
- Earlier this month, OpenAI announced a deepening of its ties to higher education via a $50 million investment in partnerships with 15 research institutions — an initiative it calls “NextGenAI.”
- The University of South Florida is adding the nation’s first named school dedicated solely to AI and cybersecurity: the Bellini College of Artificial Intelligence, Cybersecurity, and Computing.
- Common Sense Media looks at how American teenagers’ trust in the tech sector is eroding due to AI. According to the report, 6 in 10 teens believe that tech companies will prioritize profit over well-being, and nearly half doubt that these companies will make responsible decisions about AI.
- In a new report, Pearson analyzes what it takes to prepare students with the workforce skills needed in an AI-enabled world. Among its conclusions: teaching not just “critical thinking,” but also metacognition and self-regulation (i.e., “learning to learn”), is the key to unlocking students’ ability to thrive in the future.
- Another approach to AI literacy calls for schools to put a greater emphasis on foundational math skills, data science, and statistics so that students better understand the principles behind the technology, as featured in an EdWeek special report.
- At the same time, fears around AI accelerating student cheating and the downfall of productive struggle abound, as the Wall Street Journal unpacks.
- The tension between AI for learning and workforce readiness versus AI for cheating is easily seen among college professors as well, who have mixed perspectives on AI use in their classrooms.
The Latest: AI Sector Updates
Another month, another Chinese AI tool in the headlines: A new agentic system, Manus.im, claims to be a fully autonomous AI agent that does its computing in the cloud, letting users close their computers entirely and return to a fully formed output. An MIT Technology Review reporter found it exceptionally useful but noted that it frequently crashed or ran into issues with CAPTCHAs, paywalls, and the like. Meanwhile, Chinese powerhouse Baidu (often referred to as the “Google of China”) has released a new model, Ernie X1, to compete with DeepSeek. And to continue spurring Chinese innovation, Beijing will now require at least eight hours of AI instruction for students from elementary through high school. The announcement builds on UNESCO’s Beijing Consensus on Artificial Intelligence (AI) and Education. As the U.S. and China compete for dominance of AI technology, keep an eye on how both nations approach integrating AI into primary and secondary education.
In other news:
- Mistral OCR takes Optical Character Recognition (OCR) capabilities to the next level, allowing AI models to scan, understand, and extract content from complex documents that mix media types, including tables, equations, code, and images. It’s now available for free on Mistral’s LLM, Le Chat.
- An OpenAI report finds that creating smarter and more capable AI models doesn’t mitigate the risk of them “misbehaving” to achieve results. And just like with humans, exerting pressure on an AI model to stop the misbehavior can lead it to actually hide its bad behavior instead.
- In the health care sector, researchers enhanced Google’s Articulate Medical Intelligence Explorer (AMIE) to support longitudinal disease management by grounding its reasoning in clinical guidelines and adapting to patient needs across multiple visits. In a randomized study, the enhanced AMIE matched or exceeded clinicians’ management reasoning during multivisit consultations and effectively planned investigations, treatments, and prescriptions.
- More from Google: Gemini’s Deep Research tool and Gems feature are now available for free to all users, with a new feature that can use your search history to personalize responses.
- Anthropic released its own advanced reasoning model, Claude 3.7 Sonnet, noting that its release follows a “different philosophy” by integrating extended reasoning into the standard LLM rather than offering it as a separate model. Anthropic also added web search to Claude 3.7 Sonnet, giving its answers access to more current information.
- OpenAI released GPT-4.5, which doesn’t use chain-of-thought reasoning but instead improves upon its predecessors by scaling compute (i.e., pre-training) rather than thinking time. OpenAI notes that GPT‑4.5 has better emotional intelligence than 4o, making it more natural to interact with than its predecessors.
Pioneering Policy: AI Governance, Regulation, Guidance, and More
Following executive orders (EOs) that revoked President Biden’s prior AI-related EO and called for a new AI plan, the Trump administration sent out a request for information on the development of an AI action plan, with comments due earlier this month. Major tech companies including Anthropic, OpenAI, Microsoft, and Google, among others, submitted comments. This report sums up the main themes and priorities from the companies’ filings. Among those, companies are generally advocating for 1) allowing models to be trained on publicly available information regardless of copyright, 2) increasing government adoption of AI, 3) tightening export controls to keep U.S. companies competitive, and 4) investing in infrastructure that will allow AI to flourish.
In other news:
- The Joint California Policy Working Group on AI Frontier Models just released its draft report for Gov. Newsom. The report draws on research from multiple disciplines to distill a set of policy principles that will inform how California regulates frontier AI models. Anthropic has already responded, mostly in support of the suggested principles.
- Illinois state legislators introduced two AI bills (one in the House and one in the Senate) that would 1) create an advisory committee to generate and distribute guidance on using AI, 2) ask districts to include information on how AI is being used in schools in their annual reports to the State Board of Education, and 3) incorporate AI into an existing K-12 study unit on internet safety. While Chicago Public Schools has already published its own guidebook, this would be the first statewide AI guidance in Illinois.
- In news reminiscent of California’s Senate Bill 1047, Virginia legislation requiring human oversight of AI-based court decisions (HB1642) and regulating “high-risk AI systems” (HB2094) passed both state houses. Gov. Youngkin vetoed the latter (HB2094) and issued Governor’s Recommendations for the former (HB1642).
- While lawsuits are pending in various courts across the country regarding AI models’ use of copyrighted materials, some publishers are negotiating deals with tech companies. In a Bloomberg opinion piece, author Alice Robb discusses the personal and long-term effects of accepting $2,500 to let OpenAI train its models on her book.
- Continued questions around how AI will impact SEO led to a new lawsuit against Google from ed tech company Chegg.