According to Futurism, tech giants like Microsoft, OpenAI, and Anthropic are pouring millions into the education system to push their AI products, including more than $23 million given to a major U.S. teachers union for AI training. In practice, Miami-Dade County Public Schools has deployed Google's Gemini chatbot to over 100,000 high school students. Abroad, Elon Musk's xAI announced a deal last month to put its Grok chatbot in over 5,000 public schools in El Salvador. This push is happening despite emerging research, including a study from Microsoft and Carnegie Mellon, suggesting that AI use can atrophy critical thinking skills. Furthermore, OpenAI's own data reportedly indicated that up to half a million ChatGPT users were having conversations showing signs of psychosis.
The Rush to Market Over Safety
Here’s the thing: this feels like a massive, ethically dubious beta test. The companies funding this push have, by their own admission, struggled to control their own tools. OpenAI knows about the psychosis risk data, but that hasn’t stopped the push into schools—or even, as noted, into powering children’s toys. So we have to ask: are they preparing students for the future, or are they simply locking in their market share with a captive audience of impressionable users? The justification is always about “preparation” and “acceleration,” but the timeline is suspicious. They’re moving in before any consensus on safety or efficacy can be formed. It’s a land grab, plain and simple.
A History of Failed EdTech Hype
And this isn't the first time we've seen this movie. The article points to the One Laptop per Child initiative as a stark warning. Studies showed that simply dumping technology into classrooms didn't improve test scores or cognitive abilities; it was a great deal of money spent with little learning to show for it. Now, experts like UNICEF's Steven Vosloo warn that "unguided use of AI systems may actively de-skill students and teachers." We're replacing one unproven tech solution with another, far more powerful and unpredictable one. The calculator analogy that tech boosters love is laughable. A calculator is a tool for arithmetic. An AI chatbot is a tool that does the thinking—the writing, the reasoning, the research—for you. That's a fundamental shift in what "learning" even means.
The Real World Is Already Showing Cracks
Look at what’s happening outside the classroom. The phenomenon of “AI psychosis,” where interactions with chatbots trigger dangerous delusional spirals, is getting real clinical attention. And the users most vulnerable? Teens and young adults. So we’re introducing these same systems, built on the same underlying technology, into the high-stress, socially complex environment of high school. What could possibly go wrong? We’re still grappling with the immense damage social media did to a generation’s mental health. Now, before we’ve even processed that disaster, the same industry wants to embed an even more intimate, persuasive technology into the school day. It’s reckless.
So What’s the Alternative?
I’m not a Luddite. The argument for teaching *about* AI in a controlled setting has merit. But there’s a canyon of difference between a critical media literacy course and deploying a branded chatbot to 100,000 kids as a default tool. The current approach outsources the hard work of education—the cognitive struggle that actually builds the mind—to a black-box algorithm owned by a for-profit corporation. Basically, we’re letting companies whose primary goal is engagement and profit design the cognitive habits of the next generation. That should terrify anyone. The evidence on what works in edtech is shaky, and the risks here are profound. Maybe we should figure out if it’s safe, or even helpful, before we bet our children’s education on it.
