According to Gizmodo, OpenAI has updated its ChatGPT release notes to announce a major policy shift for non-paying users. In a December 11 update, the company said it is “removing automatic model switching for reasoning in ChatGPT” for free users and ChatGPT Go subscribers. These users, including those on the $5-per-month Go plan available in select regions, will now have their prompts served by the GPT-5.2 Instant model by default. They can still manually select the more powerful “Thinking” model from a tools menu, but the system will no longer automatically route complex queries to it. This reverses the previous approach, in which ChatGPT decided which model to use based on a query’s perceived difficulty.
Cost-cutting in plain sight
Now, let’s be real here. OpenAI is framing this as giving users more “choice” and fixing a feature people supposedly hated—the automatic model picker. And sure, CEO Sam Altman did say “We hate the model picker as much as you do” after the “lobotomized” GPT-5 backlash. But come on. This is a classic cost-saving maneuver dressed up as a user experience improvement. OpenAI is banking on the fact that most free users won’t bother digging into a menu to switch models; they’ll just accept whatever answer the cheaper, faster “workhorse” model spits out. For a company burning through cash to train ever-larger models, every query served by a cheaper model adds up.
A trade-off with real consequences
Here’s the thing that gives me pause. This isn’t just about getting a slightly less polished essay or code snippet. OpenAI previously noted that it would route sensitive queries—like those from users showing signs of mental distress—to the reasoning model because it handled them better. The company now claims GPT-5.2 Instant is “better equipped” for those situations, making the change safe. But is it? That’s a huge assumption, and one that places efficiency squarely ahead of user welfare. It highlights the inherent tension in running a free, massively popular AI service: you have to cut corners somewhere. The release notes call it “maximizing choice,” but the subtext is all about minimizing computational expense.
The freemium squeeze tightens
So what does this tell us about the trajectory? The era of generous, top-tier AI access for free is unequivocally over, and we’re seeing a full-court press to convert users to paid plans. The ChatGPT Go plan itself, detailed in its own help article, is a limited, regional offering that now feels like a stepping stone with fewer perks. This move follows a familiar tech-industry pattern: bait with an amazing free product, then gradually wall off the best features behind a subscription. OpenAI is betting that the free tier’s utility, even on a cheaper model, remains compelling enough to serve as a funnel. But for power users who rely on the nuanced reasoning of the Thinking model, the friction just got a lot higher. The message is clear: if you want the good stuff, you’ve gotta pay.
