The AI “Supercycle”
The AI supercycle refers to a multi-decade technological and economic cycle in which artificial intelligence drives sustained, large-scale investment, innovation, and value creation across the entire technology stack—spanning energy, semiconductors, infrastructure (cloud and data centers), foundation models, and applications.
It is characterized by massive capital expenditure (capex) in compute and infrastructure, rapid advancements in model capabilities, and the gradual shift of economic value from foundational layers (e.g., GPUs and cloud) toward higher-margin application and software layers—similar to prior supercycles like the internet, mobile, and cloud.
At its core, the AI supercycle represents a structural transformation of how software is built, distributed, and monetized, where intelligence becomes a programmable, scalable resource embedded across industries, workflows, and consumer experiences.
The biggest question in artificial intelligence right now is not whether AI is important. That part is obvious. The real question is much harder: where is the money in AI actually accruing?
AI is in a full-stack economic supercycle — one that touches semiconductors, data centers, cloud infrastructure, foundation models, inference, applications, agents, consumer software, enterprise software, and energy.
The central observation is simple: the AI ecosystem does not yet look like prior technology supercycles.
In the internet, mobile, and cloud eras, the long-term value eventually moved upward into software and applications. Software businesses became extraordinarily valuable because they had near-zero marginal costs. Build once, distribute globally, enjoy 80% to 90% gross margins. That was the classic SaaS and cloud software model.
AI is different.
Every incremental AI user consumes real compute. Every prompt burns GPU cycles. Every inference request has a cost. That changes the economic structure of the entire industry.
Right now, the AI value stack looks like an inverted triangle. The largest and most profitable value is concentrated at the bottom: semiconductors, GPUs, data centers, power, memory, networking, and infrastructure. The application layer is growing fast, but it is still relatively small and often much less profitable.
That is why Nvidia has become the defining company of the AI era so far. Its data center business captures enormous demand from hyperscalers, model labs, AI startups, and enterprises. Nvidia’s gross margins are far higher than most AI application companies because it sits at the scarce, bottlenecked layer of the stack.
Meanwhile, many AI application companies may be growing revenue rapidly but still face hard gross margin questions. Unlike traditional software, AI applications are not free to serve. The marginal user is expensive. That is one reason several large-scale AI businesses can reach billions in revenue while still having uncertain profitability.
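The margin gap described above can be made concrete with a toy calculation. Everything here is an illustrative assumption, not data from any real company: the $20 price point, the per-prompt inference cost, and the usage level are all made up to show the shape of the problem.

```python
# Toy unit-economics sketch contrasting classic SaaS with AI apps.
# All numbers are illustrative assumptions, not company data.

def gross_margin(price_per_user: float, cost_to_serve: float) -> float:
    """Gross margin as a fraction of revenue for one user."""
    return (price_per_user - cost_to_serve) / price_per_user

# Classic SaaS: near-zero marginal cost to serve one more user.
saas = gross_margin(price_per_user=20.0, cost_to_serve=2.0)

# AI app: every prompt burns GPU cycles, so cost scales with usage.
prompts_per_month = 300            # assumed usage of an engaged user
cost_per_prompt = 0.03             # assumed blended inference cost ($)
ai = gross_margin(price_per_user=20.0,
                  cost_to_serve=prompts_per_month * cost_per_prompt)

print(f"SaaS gross margin:   {saas:.0%}")   # high, usage-insensitive
print(f"AI app gross margin: {ai:.0%}")     # lower, and falls as usage rises
```

Note the structural point the sketch exposes: in SaaS, a more engaged user is pure upside; in an AI app at a flat subscription price, a more engaged user directly compresses margin.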
This creates the core AI investing question: when does the triangle flip?
In cloud computing, it took many years for infrastructure investment to translate into massive software value creation. AWS began its journey in 2006-2007, landed major customers like Netflix years later, and eventually became one of the most important profit engines in technology. That transition took roughly a decade.
AI may take as long — or longer.
The reason is that the substrate is harder. AI needs GPUs, power, data centers, memory bandwidth, networking, model training, inference optimization, and constant capital investment. This is not just software distribution. It is industrial-scale computing.
One major debate is whether the current AI capex boom is simply building capacity for future application revenue. The optimistic view is that today’s infrastructure buildout is like laying railroads. The tracks have to be built before the economy can form around them. The skeptical view is that hyperscalers may overbuild if application revenue and profitability do not catch up fast enough.
That makes hyperscaler capex guidance one of the most important signals in AI. Microsoft, Google, Amazon, Meta, and others are effectively telling the market how much conviction they have in future AI demand. If those numbers continue rising, the buildout continues. If they slow sharply, it may signal that the current equilibrium is under pressure.
Another major theme is the split between training and inference. Training frontier models is capital-intensive but relatively predictable. Inference is different. It is bursty, user-driven, and tied to real-world usage. As AI moves from demos to daily workflows, inference should become a larger share of compute demand. That shift matters because inference economics will determine whether AI apps can become durable, profitable businesses.
It also raises a critical question about consumer AI: can ChatGPT, Gemini, Claude, and similar products become as large as Google Search, YouTube, WhatsApp, Instagram, or TikTok?
ChatGPT has already reached massive scale, but scale alone is not enough. The key questions are monetization and frequency. Google and Meta monetize billions of users through ads at high annual revenue per user. AI apps currently monetize far less per user, and many users are still free. Subscription revenue is meaningful, but it may not be enough to support the full economics of consumer AI at global scale.
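The monetization gap can be sized with back-of-the-envelope arithmetic. The figures below are rough round numbers and assumptions (the user counts, the 5% paid-conversion rate, the $20/month price), chosen only to illustrate why subscriptions alone struggle at global scale.

```python
# Back-of-the-envelope ARPU comparison. All figures are assumed
# round numbers for illustration, not actual financials.

def arpu(annual_revenue: float, users: float) -> float:
    """Average annual revenue per user, in dollars."""
    return annual_revenue / users

# Ad-supported platform at global scale (assumed round numbers).
ads_arpu = arpu(annual_revenue=100e9, users=2.5e9)

# AI app: a small paying minority on a $20/month subscription.
users = 0.8e9
paying_share = 0.05                       # assumed 5% conversion
sub_revenue = users * paying_share * 20 * 12
ai_arpu = arpu(annual_revenue=sub_revenue, users=users)

print(f"Ads-model ARPU:       ${ads_arpu:.0f}/user/year")
print(f"AI blended ARPU:      ${ai_arpu:.0f}/user/year")
```

Under these assumptions the subscription model monetizes the blended user base at a fraction of what mature ad platforms achieve, which is why the advertising question in the next paragraph keeps coming up.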
That points to a likely future debate: will AI eventually become an advertising business?
Today, that feels uncomfortable. People do not want ads interrupting a personal AI conversation. But the same skepticism existed during the Facebook mobile transition. Critics argued that ads would not work on phones because screens were too small. They were wrong. The ad model adapted.
AI may produce a new kind of advertising model built around intent, context, trust, and attribution. If a user asks an AI assistant for help choosing software, booking travel, buying insurance, selecting a school, or planning a purchase, the commercial intent is extremely high.
If platforms can insert monetization without destroying user trust, advertising could become one of the biggest unlocks in AI economics.
The enterprise AI market has its own questions. Incumbents like Salesforce, Palantir, Microsoft, Adobe, ServiceNow, and others are adding AI features into existing platforms. These companies may not always show up cleanly as “AI application revenue,” but their AI usage flows through model providers, cloud infrastructure, and inference spend. The AI transformation of incumbents may therefore be partly hidden inside existing software budgets.
The most competitive layer appears to be the middle of the stack: inference platforms, AI infrastructure startups, model serving, orchestration, optimization, and developer tooling. This layer has many promising startups, but it also faces existential pressure from hyperscalers. The key question for each company is: are you a feature or a platform?
If a capability naturally belongs inside AWS, Azure, Google Cloud, OpenAI, Anthropic, or Nvidia, it may be difficult to build a standalone company around it. But if it becomes a control point across models, clouds, workloads, and applications, it may become a durable platform.
The most important takeaway is that AI should be analyzed as a full-stack economic system, not as a collection of exciting apps. The right questions are not just “what can this model do?” or “which startup is growing fast?” The better questions are:
Where does value accrue?
Who has pricing power?
Which layer has scarcity?
Which businesses have durable gross margins?
Which costs decline with scale, and which costs increase with usage?
Which companies are platforms, and which are features?
AI is not a fad. But the economics are not settled. The infrastructure layer is winning now. The application layer is growing quickly but still has to prove profitability. Consumer AI needs a stronger monetization engine. Enterprise AI must show measurable productivity gains. Inference needs to become cheaper and more efficient. And the entire ecosystem has to determine whether this inverted triangle eventually flips.
That is where the money in AI will be decided.