In 2025, the AI investment wave reached new heights. Global AI spending hit $1.5 trillion. Venture capital poured $202 billion into AI startups – a 74% increase from 2024. The hyperscalers – Microsoft, Amazon, Google, Meta – committed over $300 billion in CapEx. The Stargate Project announced $500 billion over four years. When you see numbers like these, one question seems obvious: Is this a bubble?
I think it’s one of the most commonly asked questions on Wall Street right now. And I understand why people are skeptical. Many are predicting a bust in 2026.
People can clearly see the massive investments happening. For the first time this century, we’re seeing capital deployed at this scale and this speed into a single technology sector. Nobody can disagree with that observation.
So when I say “this time is different” – I know those are dangerous words. Historically, that phrase has preceded some spectacular losses. But I want to give you 9 reasons why I think this time is actually different. Not because bubbles can’t happen in AI (they can, and some money will definitely be lost), but because the scale and trajectory might not be what the skeptics think.
1. The Investment Is Smaller Than It Looks
Let’s start with scale. When you hear “$1.5 trillion in AI spending,” it sounds enormous. But consider this: that’s approximately $185 per person on Earth per year. The global GDP is about $105 trillion, which works out to roughly $13,000 per person. So even $1.5 trillion – which sounds astronomical – is still just 1.4% of annual global GDP.
For a technology that could transform how every industry operates, is 1.4% of GDP really excessive?
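The arithmetic is easy to check. Here’s a quick sketch (the ~8.1 billion world population figure is my assumption; the spending and GDP numbers are the ones above):

```python
# Back-of-envelope scale check. Population (~8.1B) is an assumed figure;
# spending and GDP are the numbers quoted above.
ai_spending = 1.5e12   # global AI spending, USD per year
global_gdp = 105e12    # global GDP, USD per year
population = 8.1e9     # world population (assumption)

print(f"AI spend per person: ${ai_spending / population:,.0f}")  # ~$185
print(f"GDP per person:      ${global_gdp / population:,.0f}")   # ~$12,963
print(f"AI share of GDP:     {ai_spending / global_gdp:.1%}")    # ~1.4%
```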
Compare this to other technology and infrastructure investments:
| Sector | Annual Investment (2025) |
|---|---|
| Total Energy | ~$3.4 trillion |
| Clean Energy | ~$2.3 trillion |
| Oil & Gas | ~$1.1 trillion |
| AI Total Spending | ~$1.5 trillion |
| Hyperscaler CapEx | ~$300+ billion |
Sources: IEA World Energy Investment 2025, Gartner AI Spending
AI investment, while growing rapidly, is still less than half of what we spend on energy infrastructure. The “bubble” is smaller than it looks when you put it in perspective.

2. The Public Dramatically Underestimates What’s Possible
Here’s something that keeps striking me: most people don’t have a good grip on what LLMs can actually do. They’ve tried ChatGPT, maybe asked it to write a poem or answer a trivia question, and formed their opinion.
Very few know that Anthropic’s Claude is generally superior for complex reasoning tasks. Fewer still know how far Google’s models have come. And almost nobody outside the industry understands what happens when you combine these models with proper scaffolding.
What do I mean by scaffolding? Think of it like this: can an electric motor sitting on a table drive you from London to Manchester? Of course not. But connect that motor to wheels, a chassis, and a steering system – now you have a car.
This is the state of AI today. People try something on ChatGPT, it fails at some complex task, and they laugh: “See? AI can’t do X, Y, or Z.” Meanwhile, people working in the industry will tell you it absolutely can. The difference is scaffolding.
Here’s what most people miss: if you look at what’s happening in coding and software development, you’re seeing the future of every industry. Agents – software that uses LLMs to take actions, not just chat – are already transforming how code gets written. These agents don’t just answer questions. They read files, execute code, search the web, and iterate on solutions. They use tools.
The key insight is that LLMs are processors, not databases. When someone complains “ChatGPT hallucinated a fact,” they’re misunderstanding what the technology is. LLMs process and reason – they’re not meant to be encyclopedias. When you give them access to real data through tools and scaffolding, the “hallucination” problem largely disappears because they’re processing actual information, not retrieving from training data.
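To make the scaffolding idea concrete, here’s a minimal sketch of an agent loop. Everything in it is illustrative: `call_llm`, the JSON action format, and the tool set are hypothetical stand-ins, not any vendor’s actual API.

```python
# Minimal agent-loop sketch. `call_llm` and the action format are
# hypothetical stand-ins, not a real vendor API.
import json
import subprocess

def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

def run_code(source: str) -> str:
    # A real system would sandbox this; a bare subprocess is just for the sketch.
    result = subprocess.run(["python", "-c", source],
                            capture_output=True, text=True, timeout=30)
    return result.stdout + result.stderr

TOOLS = {"read_file": read_file, "run_code": run_code}

def agent(task: str, call_llm, max_steps: int = 10) -> str:
    """call_llm takes the message history and returns JSON describing
    either a tool call or a final answer."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = json.loads(call_llm(messages))
        if action["type"] == "final":
            return action["answer"]
        # Feed real tool output back in: the model now processes actual
        # data instead of recalling (possibly hallucinated) training memory.
        output = TOOLS[action["tool"]](**action["args"])
        messages.append({"role": "tool", "content": output})
    return "step limit reached"
```

The loop is the scaffolding: the model plans, acts through tools, observes real results, and iterates.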
What people see on free ChatGPT is the motor on a table. What companies are building internally is the car. Code mixing with LLMs. LLMs writing code that executes. Agents that plan, research, and act. The gap between public perception and actual capability is enormous.
3. Software Development Leads, Then Spreads – The Internet Pattern
Here’s a pattern worth understanding: software and coding tools are leading AI adoption. If you want to understand where things are going, look at developer tools first.
Why? Two reasons.
First, code is verifiable. When an LLM writes code, you can run it and check if it works. This makes training data perfect for improving the models. The feedback loop is tight.
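To see why that loop matters, here is a minimal verify-and-filter sketch: run each generated candidate against a test and keep only what passes. `generate_solution` is a hypothetical stand-in for a model call, not any real API.

```python
# Verify-and-filter sketch: pass/fail on real execution is an objective
# signal. `generate_solution` is a hypothetical stand-in for a model call.
import subprocess

def passes_tests(candidate_code: str, test_code: str) -> bool:
    """Run candidate plus its tests in a subprocess; exit code 0 = verified."""
    program = candidate_code + "\n\n" + test_code
    result = subprocess.run(["python", "-c", program],
                            capture_output=True, timeout=30)
    return result.returncode == 0

def collect_verified(task: str, test_code: str, generate_solution,
                     attempts: int = 8) -> list[str]:
    # Keep only candidates that objectively pass: the tight feedback
    # loop described above.
    candidates = (generate_solution(task) for _ in range(attempts))
    return [code for code in candidates if passes_tests(code, test_code)]
```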
Second, the people building scaffolding for LLMs are engineers. The same type of people who built the internet.
Think about early internet history. In 1995, there were no hunting forums. No cooking communities. No fashion blogs. The first forums and websites were about engineering and the internet itself. The first products sold online were IT products. The first services were things like website building. Engineers built for engineers.
Why? Because engineers were the ones who understood the technology well enough to build on it.
Today’s AI coding tools – Claude Code, Cursor, GitHub Copilot – are 2-3 years ahead of what exists in any other industry. I can tell you from direct experience: the things these tools can do are extraordinary. They’re not just autocomplete. They reason through problems, they plan multi-step solutions, they debug their own mistakes.
These capabilities haven’t spread to other industries yet. A lawyer doesn’t have an AI agent that can research case law, draft arguments, and file motions. A doctor doesn’t have an AI agent that can review patient history, order tests, and suggest treatments. But the engineers building these tools? They have this already for their work.
Now look at the internet today. The biggest communities and most-used tools have nothing to do with IT. Social media, e-commerce, entertainment – the internet expanded far beyond its engineering origins.
Expect the same with AI. We’re in the early phase where engineers build for engineers. But this will spread. We have seen nothing yet.
4. The Revenue Is Real
This isn’t just hype and investment. Real revenue is being generated.
Anthropic’s growth tells the story. According to multiple reports (Fortune, The Information, Sherwood):
| Period | Approximate ARR |
|---|---|
| 2022 | ~$10 million |
| 2023 | ~$150 million |
| End of 2024 | ~$1 billion |
| August 2025 | ~$5 billion |
| October 2025 | ~$7 billion |
| December 2025 | ~$9 billion (on track) |
Note: These are estimates based on media reports, not official disclosures.
That’s roughly 9x growth in 2025 alone. Claude Code – Anthropic’s coding tool – is now generating close to $1 billion in annualized revenue by itself, with usage growing 10x in just three months.
And here’s something crucial: about 70-75% of Anthropic’s revenue comes from API calls. That’s not consumers chatting with Claude for fun. That’s companies building real applications that use AI – and paying for every token.

When the skeptics dismiss AI as pure speculation, I point to numbers like these. Someone is paying. A lot of someones. They’re not paying for hype – they’re paying because AI solves problems faster or cheaper than alternatives.
There’s another angle to this. In machine learning systems, inference (running the model in production) typically accounts for about 80% of the total computational load over a model’s lifetime. Training gets the headlines, but running the models is where the sustained demand sits. The revenue Anthropic is generating is largely inference revenue – real usage, not one-time training costs.
5. The Infrastructure Math Proves Demand
Here’s where the infrastructure story gets interesting. The companies building AI know something the skeptics don’t: the infrastructure has to be built ahead of demand – and they’re racing to build it.
Current estimates put the AI inference market at roughly $106 billion in 2025, growing to perhaps $250-380 billion by 2030. That’s roughly 2.5-3.5x growth, which sounds modest. But it may underestimate the explosion in agentic AI use cases that is just beginning.
Look at what the big players are actually spending:
| Company | Announced Investment |
|---|---|
| Microsoft | $80 billion (2025, AI data centers) |
| Apple | $500 billion (multi-year US) |
| Stargate Project | $500 billion (4 years) |
| Amazon | ~$50 billion (AI data centers) |
The hyperscaler CapEx trajectory tells the story: $100 billion in 2023, rising to $300+ billion in 2025, potentially exceeding $500 billion in the next few years. These companies aren’t stupid. They’re betting hundreds of billions because they see the demand coming.
6. AI Will Expand GDP Itself
Here’s an argument the bubble skeptics often miss: AI isn’t just competing for a share of existing economic activity. It will create new economic activity.
Think about historical precedent. Electricity didn’t just replace candles – it enabled refrigeration, assembly lines, and eventually entire industries that couldn’t exist before. The internet didn’t just replace letters and newspapers – it enabled e-commerce, social media, and the entire app economy.
AI has the same characteristic. Tasks that were previously economically unviable – because they required too much human time or expertise – now become feasible. A small business that couldn’t afford a legal team can now get AI to review contracts. A solo developer can build applications that would have required a team of ten. A researcher can analyze datasets that would have taken months to process.
This creates an economic multiplier. AI spending doesn’t just redistribute existing GDP – it expands what’s economically possible. The total addressable market for AI isn’t a fixed pie; it’s a pie that grows as AI capabilities improve.
This is why comparing AI investment to historical bubbles is misleading. Tulip bulbs didn’t create new economic activity. AI does.
7. Software Is Becoming Cheaper to Produce
There’s a virtuous cycle at play that many people haven’t fully appreciated yet.
AI coding tools – Claude Code, Cursor, GitHub Copilot – are making software development dramatically faster. What used to require a team of ten can now be done by two or three people. Tasks that took days now take hours. The productivity multiplier varies, but estimates range from 2x to 10x depending on the task.
This has a compounding effect on AI adoption itself. As software becomes cheaper to produce:
- More companies can afford to build AI-powered applications
- More use cases become economically viable
- More software gets built overall
- More demand for AI infrastructure results
The skeptic’s argument is often: “AI will eliminate jobs and reduce economic activity.” But the historical pattern with automation is the opposite – it expands what’s economically feasible, creating more activity, not less.
Consider what’s happening in practice. Companies that couldn’t justify hiring developers can now build custom software with AI assistance. Internal tools that would never get budget approval now get built over a weekend. The barrier to software creation is collapsing.
This means more software, built faster, at lower cost – which drives more demand for compute, not less. It’s a virtuous cycle that accelerates AI infrastructure needs.
8. The Jevons Paradox: Cheaper AI Means More Spending, Not Less
There’s a counterintuitive economic principle that keeps proving itself in technology: when something becomes more efficient, we don’t use less of it – we use more.
This is the Jevons Paradox, named after William Stanley Jevons, the 19th-century economist who observed that as coal-fired steam engines became more efficient, total coal consumption increased rather than decreased.
We’re seeing exactly this with AI. Models like DeepSeek have made certain AI capabilities dramatically cheaper. The naive prediction would be: “Great, companies will spend less on AI.” But that’s not what’s happening.
What actually happens is threefold:
First, cheaper models expand the addressable market. Tasks that were too expensive to automate with frontier models become economical with cheaper alternatives. Traditional machine learning systems, rule-based classifiers, and manual processes all get replaced. The total volume of AI usage explodes.
Second, for high-value work, people don’t settle for “good enough.” If you’re a developer, you don’t use the cheapest model that can technically write code. You use the best model available because the value of excellent code far exceeds the cost difference. Claude Code costs $200/month, but if it saves you 20 hours of work, that’s worth $1,000-2,000 at typical developer rates. People pay for frontier models because the ROI is obvious.
Third, we’re in the very early days. Most businesses haven’t even begun to adopt AI at scale. As models get cheaper and better, adoption accelerates. More people use AI. More use cases emerge. The total spending goes up, not down.
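A toy calculation captures the dynamic. All the numbers below are illustrative assumptions, but they show how a 10x price drop can still triple total spend:

```python
# Toy Jevons paradox illustration. All figures are assumptions.
price_old = 10.00   # $ per million tokens before the efficiency gain
price_new = 1.00    # 10x cheaper after the gain

usage_old = 1_000   # million tokens consumed per month
usage_new = 30_000  # lower prices unlock 30x more usage

spend_old = price_old * usage_old   # $10,000 per month
spend_new = price_new * usage_new   # $30,000 per month

print(f"Price fell {price_old / price_new:.0f}x, "
      f"total spend rose {spend_new / spend_old:.0f}x")
# When demand grows faster than price falls, efficiency raises total spending.
```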
This is why the “AI efficiency will crash semiconductor demand” thesis is backwards. Efficiency drives adoption. Adoption drives demand. We’ve seen this pattern with every major computing wave – PCs, mobile, cloud. AI will be no different.
9. Short-Term Cycles Won’t Kill the Trajectory
Even if we see downcycles (and we will), they won’t change the underlying trajectory.
When I say “short term” as a long-term investor, I mean AI downcycles that could last 1-2 years. Then things go back up.
The underlying demand is real. The technology works. The use cases exist. A fab constraint doesn’t make AI less useful – it just creates a temporary bottleneck. A funding crisis doesn’t make the technology worthless – it just slows the pace of development.
This won’t be a straight line up. We’re already seeing cycles in RAM prices. Expect more volatility. But volatility isn’t the same as a bubble bursting. It’s the normal rhythm of a technology buildout.
Let me share my intuition on the range of outcomes. The inference market today is about $106 billion annually. Looking at the GDP-level potential of AI, and how infrastructure must be built ahead of demand, I believe in the next five years we could see annual inference spending reach anywhere from $1 trillion to $10 trillion.

That’s a wild range, I know. But both ends seem plausible to me. The conservative case: AI becomes important but remains somewhat niche, and we get to $1 trillion. The optimistic case: AI becomes the primary interface for most knowledge work, and we approach $10 trillion. Either way, that’s 10x to 100x from today.
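A quick sanity check on what that range implies, compounding from today’s ~$106 billion base over five years:

```python
# Implied compound annual growth rate (CAGR) for the $1T and $10T scenarios.
base = 106e9   # today's annual inference market, USD (figure from above)
years = 5

for target in (1e12, 10e12):
    cagr = (target / base) ** (1 / years) - 1
    print(f"${target / 1e12:.0f}T in {years} years implies ~{cagr:.0%} annual growth")
# $1T  -> ~57% per year
# $10T -> ~148% per year
```

Sustained 57% annual growth is aggressive but not unprecedented for a young platform; 148% would require adoption at a pace we’ve never seen. That’s the honest width of the range.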
If you need to build infrastructure to support $1-10 trillion in annual inference, you need to invest several trillion dollars over the next few years. That’s exactly what we’re seeing take shape. Spending at that scale isn’t a bubble – it’s building ahead of demand.
The Risks: Where Things Could Go Wrong
I’m not here to tell you AI investment is risk-free. There are two scenarios that could cause significant pain – and one of them is already playing out.
Scenario 1: Fab Constraints
Think of it like trying to build more cars when there’s only one factory making engines. At some point, that factory becomes the bottleneck.
Look at TSMC’s trajectory. AI chips have gone from just 2% of total revenue in 2022 to about 31% in Q3 2025 – that’s $10.16 billion in a single quarter, growing 2.7x year-over-year. The HPC segment (which includes AI) now accounts for 57-60% of total revenue, with AI accelerators representing about 54% of that HPC segment.

This is the real constraint: physical manufacturing capacity. Look at what’s happening with RAM:
- Memory prices grew 88% from 2023 lows
- HBM (High Bandwidth Memory) costs about 5x more per GB than standard DDR5
- Each GB of HBM consumes approximately 3x the wafer capacity of DDR5
- Suppliers are locked through 2026; no sign of shortage easing before 2027
- Consumer PC and electronics are being squeezed out as wafer capacity shifts to AI
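A toy allocation model shows why the squeeze is so sharp. The capacity split is an assumption; the ~3x wafer-intensity figure is the one from the list above:

```python
# Toy wafer-allocation model. Capacity figures are assumptions; the 3x
# HBM-vs-DDR5 wafer intensity comes from the estimates above.
total_wafers = 100.0                       # fixed fab capacity, arbitrary units
ddr5_gb_per_wafer = 1.0                    # DDR5 output per wafer (normalized)
hbm_gb_per_wafer = ddr5_gb_per_wafer / 3   # HBM needs ~3x wafer area per GB

for hbm_share in (0.0, 0.2, 0.4):
    ddr5_out = (1 - hbm_share) * total_wafers * ddr5_gb_per_wafer
    hbm_out = hbm_share * total_wafers * hbm_gb_per_wafer
    print(f"HBM share {hbm_share:.0%}: DDR5 {ddr5_out:.0f} GB, "
          f"HBM {hbm_out:.1f} GB, total {ddr5_out + hbm_out:.1f} GB")
# Every wafer shifted to HBM removes a full GB of DDR5 but adds only ~0.33 GB
# of HBM: total gigabytes shrink while demand holds, so prices spike.
```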
And just this week, we got confirmation of how serious these constraints are: Nvidia announced its largest acquisition ever – approximately $20 billion for Groq’s assets. Groq specialized in inference chips built around SRAM. When the dominant player pays nearly 3x what the company was valued at just months prior, it signals supply constraints are real and getting worse.
Manufacturing capacity was never a concern for fabless Nvidia, and it wasn’t a problem for TSMC while AI was a small portion of its business. Now it is going to be a problem. The physical reality of semiconductor manufacturing will become a bottleneck, and it will show up as very high input prices. We’re seeing it in RAM already. This creates downside pressure and corrective cycles – essentially short AI crashes as costs spike.
As I’ve written before, fabs hold the truth. The companies that actually manufacture chips have leverage that chip designers can’t match.
Scenario 2: Funding Crisis
Right now, building AI infrastructure and frontier models requires faith. Companies like OpenAI and Anthropic are burning money at extraordinary rates – billions per year. Without continued investor confidence, building ahead of demand isn’t possible.
Here’s the chain of logic that makes this fragile:
- These companies don’t generate enough revenue to fund their infrastructure needs
- They depend on continuous capital raises at ever-higher valuations
- Those valuations are based on future expectations, not current fundamentals
- A stock market crash or economic downturn destroys the wealth effect that enables these raises
- Without new capital, they can’t build ahead of demand
- They’re forced to cut spending, slow training, or raise prices to survive
- The whole industry decelerates
Anthropic just raised at a $183 billion valuation. That’s predicated on hitting $20-26 billion in revenue by 2026 and $70 billion by 2028. Any serious economic disruption puts those projections – and that valuation – at risk.
This is a sentiment-driven market. Faith matters. And faith can evaporate quickly.
Why Neither Risk Is Terminal
Here’s my view on both scenarios: they’re short-term downcycles that won’t change the overall trajectory. The underlying demand is real. The technology works. The use cases exist.
A fab constraint creates a temporary bottleneck. A funding crisis slows the pace of development. Neither makes the technology worthless or eliminates demand. We’re talking about 1-2 year disruptions, not the end of the story.
Investment Implications
So where does this leave an investor?
The biggest returns of this mega cycle will come from a few companies we don’t know yet. That’s how technology cycles work – the big winners often emerge as the wave builds.
But the most predictable returns come from established semiconductor companies. And there’s a pattern in the supply chain:
Lower in the chain = Safer but more modest returns
- Real fabs: Intel, Samsung, TSMC, Micron
- These have physical assets, durable competitive advantages (moats), and will benefit regardless of which AI company wins
Higher in the chain = Bigger returns but more risk
- Chip designers, AI companies, application builders
- More unpredictable, potentially much larger gains (or losses)
The safer you want to be, the lower you go in the supply chain. The more risk you can stomach, the higher you go.
The Honest Assessment
I’m not arguing that no bubble dynamics exist in AI. Some investments are overpriced. Some companies will fail. Some money will be lost.
What I’m arguing is:
- The scale of AI investment is smaller than it appears when put in perspective
- The public dramatically underestimates current capabilities
- The pattern follows internet expansion – software first, then everywhere
- Real revenue and real use cases exist – this isn’t pure speculation
- Infrastructure spending proves sophisticated buyers see real demand
- AI will expand GDP, not just compete for existing share
- Software production costs falling creates a virtuous cycle
- The Jevons Paradox means efficiency drives more spending, not less
- The risks are real but likely temporary, not terminal
Could I be wrong? Of course. These are complex systems and prediction is hard. The timing could be off. The downcycles could be deeper than I expect.
But if you’ve quickly made up your mind that AI is “obviously a bubble” and we’ll see a bust in 2026, I hope these 9 points give you something to think about. The skeptics might be right about short-term volatility. They might be wrong about the long-term direction.
If you’re convinced by this argument, consider starting with semiconductor fabs like TSMC, Samsung, or Micron – they’re the safest way to play the buildout. If you’re still uncertain, keep watching the revenue numbers from Anthropic and OpenAI. If growth continues at these rates, the market will eventually have to acknowledge the demand is real.
I’d rather own some semiconductors through the ups and downs than miss one of the defining technology buildouts of our time.
*Disclaimer: This article represents my personal analysis and opinions. It is not financial advice. I own TSMC, Samsung, Intel, Micron, ARM, Qualcomm, ASML. Always do your own research before making investment decisions.*