OpenAI’s recent announcement of ChatGPT Pro at $200 per month marks a significant milestone in the AI industry. While this price point might seem high compared to current consumer AI subscriptions, it actually reveals a fascinating economic pattern that will likely shape the future of AI adoption and spending.
The new tier provides access to OpenAI's most advanced model, promising improved reliability particularly in areas like data science, programming, and case law analysis. Their benchmarks show remarkable improvements: for instance, 86% accuracy on competition math problems versus 78% for the standard model. But the real story here isn't just performance improvements; it's how these improvements will drive total compute consumption.
Source: OpenAI
The Jevons Paradox
This brings us to Jevons paradox, a principle first observed in the nineteenth-century coal industry that applies remarkably well to our AI future. William Stanley Jevons noted that as coal use became more efficient, society ended up using more coal overall, not less. The same pattern is emerging with AI compute.
Here's why: many white-collar tasks are arguably already technically automatable by AI, but there's a crucial gap between possibility and implementation. Converting human procedures into AI procedures requires significant effort and investment, and that effort only makes economic sense when the potential savings exceed the implementation costs. At current pricing levels, this equation works primarily for large organizations with thousands of employees performing similar, predefined tasks.
But as models become more capable and reliable, something interesting happens: the implementation effort decreases while the value generated increases. This is where the paradox kicks in. Even if future models cost $2,000 per month, ten times current pricing, they would still make economic sense for more use cases because they would require less human oversight and implementation effort.
The economic logic is straightforward: if a model can reliably automate tasks that currently require highly paid professionals, even a $2,000 monthly subscription represents a fraction of the potential savings. More importantly, as these models become more reliable, they can be applied to increasingly critical tasks, expanding the total addressable market.
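To make that logic concrete, here is a minimal back-of-the-envelope sketch of the break-even arithmetic. Every figure in it (fully loaded labor cost, automatable share of tasks, per-seat subscription price, implementation cost) is an illustrative assumption, not data from OpenAI or any employer.

```python
def monthly_savings(labor_cost_per_worker: float,
                    automatable_share: float,
                    workers: int,
                    subscription_per_seat: float,
                    implementation_cost_monthly: float) -> float:
    """Net monthly savings from automating a share of workers' tasks.

    All inputs are illustrative assumptions for a rough estimate,
    not real pricing or salary data.
    """
    gross_savings = labor_cost_per_worker * automatable_share * workers
    total_cost = subscription_per_seat * workers + implementation_cost_monthly
    return gross_savings - total_cost

# Example: 50 workers at $10,000/month fully loaded, 30% of their tasks
# automatable, a hypothetical $2,000/month subscription per seat, and
# $20,000/month of amortized implementation effort.
net = monthly_savings(10_000, 0.30, 50, 2_000, 20_000)
print(net)  # 30000.0 -> positive even at ten times today's pricing
```

Note that the subscription is only break-even or better once the automatable share of labor cost exceeds the per-seat price plus amortized implementation cost, which is exactly why the equation currently favors large organizations with many similar roles.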
AI Agents Emerge
This economic equation becomes even more compelling when we consider the emerging landscape of AI agents and automated workflows. Unlike chat interfaces where humans drive the interaction, API-connected agents can work continuously, orchestrating complex sequences of actions across multiple tools and systems. These agents might analyze thousands of documents, monitor data streams, handle customer inquiries, or manage entire business processes – each action consuming compute tokens. A single automated workflow could easily involve billions of tokens daily, making even higher compute costs economically viable given the scale and speed of automation achieved. This marks a shift from human-guided AI assistance to truly autonomous AI operations, where compute consumption grows with every automated task and decision.
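As a rough illustration of what agent-scale consumption could cost, here is a simple estimate. Both the daily token volume and the blended per-million-token price are assumptions chosen for the sketch, not published figures from any provider.

```python
# Back-of-the-envelope cost of an always-on agent workflow.
# Assumed figures (illustrative, not published pricing):
tokens_per_day = 2_000_000_000    # 2 billion tokens/day across the workflow
price_per_million = 5.00          # blended input/output price in USD

daily_cost = tokens_per_day / 1_000_000 * price_per_million
monthly_cost = daily_cost * 30
print(f"${daily_cost:,.0f}/day, ${monthly_cost:,.0f}/month")
# -> $10,000/day, $300,000/month
```

Even a six-figure monthly compute bill can be economically viable if the workflow replaces or augments a team whose fully loaded cost is higher, which is the core of the Jevons dynamic described above.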
This trend is already visible in the data. OpenRouter, a service that lets users switch easily between AI models, shows an explosion in token consumption: from roughly 8 billion to over 300 billion tokens per week within a year, roughly a 37-fold increase. While part of this growth comes from the service's increasing popularity, it demonstrates the dramatic acceleration in automated AI usage.
Source: OpenRouter
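The OpenRouter figures above imply a striking compound growth rate. Treating the growth as exponential over 52 weeks (a simplifying assumption about the growth curve's shape):

```python
import math

start_tokens = 8e9    # ~8 billion tokens/week a year earlier
end_tokens = 300e9    # ~300 billion tokens/week now
weeks = 52

growth_factor = end_tokens / start_tokens                    # ~37.5x in a year
weekly_rate = math.exp(math.log(growth_factor) / weeks) - 1  # compound weekly
print(f"{growth_factor:.1f}x overall, {weekly_rate:.1%} compounded weekly")
# -> 37.5x overall, 7.2% compounded weekly
```

A sustained 7% weekly compounding rate is the kind of demand curve that makes falling per-token prices consistent with rising total spending.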
This creates a cascade effect. As larger organizations adopt these systems, they establish new baselines for operational efficiency. Their competitors must follow suit or risk becoming uncompetitive. Meanwhile, the tools and practices developed for these implementations gradually become standardized and more accessible, eventually reaching smaller organizations.
The pattern we’re seeing suggests that while individual compute costs per task will continue to decrease, total compute spending will increase dramatically. Organizations will use AI for more tasks, more complex problems, and more critical decisions. They’ll pay more for better models because the economic value proposition remains strong even at higher price points.
This transition won’t happen overnight. Different sectors and organizations will move at different speeds, based on their size, complexity, and competitive environment. The smallest companies might wait years before adoption makes economic sense for them. But for large organizations with substantial white-collar workforces, the economic incentives are already clear and will only become more compelling as models improve.
Understanding this pattern helps explain why companies are increasing their AI investments despite rising costs. They’re not just paying for current capabilities – they’re positioning themselves for a future where AI compute becomes as essential to business operations as electricity is today. In that future, just as with electricity, we’ll likely use far more compute than today, even if each individual operation becomes more efficient.
The Big Investment Opportunity
This economic pattern explains why I’m particularly enthusiastic about semiconductor manufacturers, especially those with fabrication capabilities. Only three companies in the world (TSMC, Intel, and Samsung) can manufacture the leading-edge chips needed for these advanced AI models. As AI compute demand grows following Jevons paradox, these companies become critical bottlenecks in the entire AI value chain.
This isn’t just about picking winners in AI – it’s about owning the companies that every AI player must rely on, regardless of which models or applications ultimately dominate. As the economics of AI drive increased total compute consumption, the companies controlling these essential manufacturing capabilities are positioned to capture an increasing share of the value.
Disclaimer: The author owns TSMC, Samsung, and Intel shares at the time of publication.
Other semiconductor stocks owned: Micron, ARM, Qualcomm, ASML, AMD
Further reading:
- https://finance.yahoo.com/news/openai-announces-o3-models-175657596.html
- https://www.reddit.com/r/singularity/comments/1hisp7o/o3_high_compute_costs_is_insane_3000_for_a_single/ (models are becoming better and more efficient, but compute is exploding; o3 is a great example)
- https://www.reddit.com/r/LocalLLaMA/comments/1iehstw/gpu_pricing_is_spiking_as_people_rush_to_selfhost/ (Wall Street is panicking, but Jevons paradox is happening)