The $20 AI subscription era has become untenable

Welcome to the inaugural edition of PCWorld’s latest newsletter! The subject: AI, and how it’s coming to change our world, whether we like it or not. The best way to prepare for the coming AI era is to use AI, every day, to figure out what works and what doesn’t. I’m here to help.

I’m Ben Patterson, and I’ll be your host. Each week, I’ll cover need-to-know AI trends from a consumer perspective, including practical AI tips, hands-on experiences with the latest AI tools, and prompts to help you get the most out of your AI chats. If you want the latest issue in your inbox each week, just sign up right here.

The name of our AI newsletter? Well… we’re still working on that. (Names are tough!) If you’ve got a great idea for a name, drop me a line or hit us up on social. We’re all ears.

The most powerful AI features, particularly those involving agents, feel a lot more magical when you get to use them on the cheap. That’s what’s been happening with flat-rate AI plans like ChatGPT Plus and Pro, Claude Pro and Max, and Google AI Pro and Max. For $200, $100, or even just $20 a month, AI users (myself included) have been taking a joy ride with OpenAI’s Codex, Anthropic’s Claude Code, Claude Cowork, and Claude Design, not to mention Google’s Antigravity, Nano Banana 2, and NotebookLM.
From coding tools that build apps from a prompt to desktop AI assistants that create and edit files on their own, these tools deploy teams of agents that can work wonders in seconds, both dazzling us and scaring us (“AI can do my job better than me, I’m cooked!”) in equal measure. But a big part of what made these AI-powered feats so heady was that they were so cheap. All this app building, web designing, and image creation for as little as $20? Are you kidding me?

Well, it turns out they were kidding. Microsoft-owned GitHub is the most visible AI provider to have burst this particular AI bubble (as I wrote Tuesday), switching all its flat-rate plans to much more expensive usage-based models while saying out loud what everyone’s been thinking: the current crop of “Plus,” “Pro,” and “Max” AI plans are broken, busted, and unsustainable.

Anthropic has been dropping hints about this inconvenient truth as well, with the company’s Head of Growth (who may have been a little too good at his job) stating that the flat-rate Claude Pro and Max plans “weren’t built” for agentic tools like Claude Code and Cowork. What they were built for was chat, and only chat. Now Anthropic is testing the idea of dropping Claude Code from its Pro plan, while tinkering with the usage allowances of Pro and Max users, trying to find a combination that makes those plans economically feasible.

And while OpenAI’s Sam Altman has been sounding notes of defiance, practically daring Anthropic to downgrade its flat-rate plans, it’s hard to imagine that ChatGPT Plus and Pro won’t eventually follow suit.

The upshot is this: we’re all about to find out how expensive AI really is. And when we realize that personal AI assistants from the likes of Anthropic, OpenAI, and Perplexity will cost us not $20, not $100, but hundreds of dollars a month (and you can add more zeros for business and enterprise users), the magic will give way to cold, hard reality.
More in AI this week

Why did OpenAI instruct its latest GPT models to never, ever talk about goblins, gremlins, and other diminutive creatures? Here’s the reason (as I shared Thursday).

You’re not nuts for saying “please” and “thanks” to AI. New research says an AI model in a high well-being “state” is more likely to stay positive and engaged, while “unhappy” models may try to evade negative interactions.

GPT-5.5, ChatGPT’s latest and most powerful model yet, doesn’t require the hand-holding that older models did. But it also gets fussy with the longer, highly detailed prompts that might have worked well in the past. Check out some prompts that are ready for GPT-5.5.

Talkie-1930 is a vintage AI model that was trained only on pre-1930 data. Talking to it is like talking to a person from the past, in both good ways and bad (its outputs can be offensive, so beware). Talkie-1930’s purpose: to gain more insight into how modern AI models work (see the official paper).

The civil trial between Elon Musk and Sam Altman is underway, and as expected, it’s more a clash of egos than anything else. I’m not terribly interested in billionaires slinging mud at each other over AI, but here’s the latest if you want to dig in (from The New York Times).

I asked ChatGPT and Claude to book dinner reservations for me. It didn’t go well.

If you have a complex task for an AI, the last thing you want to do is give it a fuzzy prompt; doing so is a recipe for a fuzzy result. Indeed, the bigger the ask, the more detailed your AI prompt should be. Sound daunting? If so, here’s a pre-prompt to help compose your final prompt.

This “prompt decomposition meta-prompt” directs the AI to take your task and break it down into its component parts, pinpointing the crucial definitions of the project. In prompt engineering, this process is known as “decomposition,” and it’s a great way to see how the AI is “thinking” about the task you’ve given it.

That’s all for now!
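If you drive a chatbot from a script rather than a chat window, a decomposition meta-prompt can live in a small helper. The sketch below is purely illustrative: the template wording and the function name `build_decomposition_prompt` are my own example of the technique, not the exact prompt described above.

```python
# A minimal, illustrative "prompt decomposition meta-prompt."
# The template text is an example of the technique, not an official prompt.
DECOMPOSITION_META_PROMPT = """\
Before answering, do NOT complete the task yet. Instead:
1. Restate the task below in your own words.
2. Break it down into its component sub-tasks.
3. List any terms, constraints, or success criteria that need defining.
4. Ask me up to three clarifying questions.
Wait for my answers, then help me compose the final prompt.

Task: {task}
"""

def build_decomposition_prompt(task: str) -> str:
    """Wrap a user's task in the decomposition meta-prompt."""
    return DECOMPOSITION_META_PROMPT.format(task=task)

print(build_decomposition_prompt(
    "Plan a three-day, budget-friendly first trip to Tokyo."
))
```

Paste the result into any chatbot; the model answers with a breakdown and questions first, which you then fold into a sharper final prompt.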
Thanks for reading our very first, soon-to-be-named AI newsletter. If you want more like this each week, don’t forget to sign up. See you next time.
