Speed without learning is just impatience

Sometimes speed is just impatience wearing ambition’s clothes.

Silicon Valley has always been brutal about the future.

Nobody cares what you built ten years ago.
Nobody gives you permanent credit because you took a company public.
If you stop building, you disappear.

That’s actually one of the best things about the Valley.

But lately, I think speed has curdled into something else.

Speed with learning beats impatience

Researchers jump labs every eighteen months.
Founders shut down companies after a year because the excitement faded.
Engineers treat careers like a video game: collect title, vest cliff, move on.

Some speed is good.

Good speed compresses learning.
It helps you kill stagnant ideas.
It gets you from signal to iteration before the market catches up.

Bad speed is different.

Bad speed is novelty addiction.
It keeps you from sitting in the ugly middle where the real edge forms.

AI makes this more dangerous.

You can prototype faster.
Test faster.
Ship faster.
And convince yourself faster that motion equals progress.

But lower friction can also accelerate false starts.

In a world where building is cheap, choosing what is worth building matters more.

Depth still wins.

Judgment compounds.
Relationships compound.
Domain expertise compounds.
Trust with a team compounds.

Cut the timeline short and you never reach the part of the curve where the returns become extraordinary.

The future belongs to fast learners.

But also to people with enough discipline to stay.

The Digital AI Stack: Why the AI Economy Is Becoming a Full-Stack Industrial System

Most people still use AI like it is a chatbot.

That framing is already obsolete. AI is not becoming a feature. It is becoming a complete industrial stack, with its own infrastructure layers, operating systems, orchestration frameworks, application ecosystems, and economic gravity.

What cloud computing did to enterprise infrastructure over the last twenty years, AI is about to do to every layer of software, decision making, and digital labor. The mistake many investors, founders, and operators make is viewing AI through only one layer of the stack.

Some focus entirely on models. Some focus entirely on applications. Some obsess over GPUs. Others think agents alone are the future.

In reality, AI is becoming a vertically integrated system where every layer compounds the value of the layer above it. That is what makes this moment structurally important.

The modern AI economy is not one market. It is an interconnected stack.

Layer 1: The Digital AI Chip

At the bottom of the stack sits compute.

This is the industrial foundation powering everything above it. Without massive parallel computation, none of the modern AI ecosystem exists.

For years, most people thought CPUs were the center of computing. AI changed that assumption completely. Training and inference workloads massively reward parallelism, memory bandwidth, and specialized tensor operations. That shifted power toward GPUs, TPUs, LPUs, and custom accelerators. Cerebras is a pure bet on that shift.

This is why companies like NVIDIA became some of the most strategically important companies in the world. The market is not simply paying for chips. It is paying for the ability to manufacture intelligence at scale. And increasingly, memory architecture matters just as much as raw compute.

HBM3e, SRAM optimization, interconnect bandwidth, power efficiency, and distributed cluster orchestration are becoming critical competitive layers because modern AI systems are fundamentally constrained by data movement. That is where Micron, SK Hynix, and, to a lesser extent, SanDisk compete.

The dirty secret of AI infrastructure is that intelligence is often bottlenecked not by reasoning capability, but by how fast systems can move memory through the stack. This is also why the AI race is becoming geopolitical. Countries are realizing that compute capacity is not merely technology infrastructure. It is economic leverage.

The cloud providers understood this early.

AWS, Microsoft Azure, and Google Cloud are intelligence utilities.

Layer 2: Symbolic Systems

Above the hardware sits symbolic representation. AI models are not magical. They are prediction systems trained on symbolic abstractions of reality.

Text. Code. Images. Audio. Relationships. Logic.

The internet became the largest symbolic dataset ever created by humanity.

Every GitHub repository, Wikipedia article, Stack Overflow answer, scientific paper, Reddit discussion, legal contract, customer support transcript, and YouTube subtitle became training material for machine intelligence. This layer matters because intelligence requires abstraction.

A model cannot reason about concepts unless reality has first been translated into symbolic structures.

This is why data quality matters so much.

Garbage symbolic systems create garbage reasoning systems.

It also explains why enterprises are suddenly obsessed with knowledge graphs, embeddings, retrieval systems, vector databases, and internal context pipelines. The next wave of enterprise AI will not merely be powered by public internet knowledge. It will be powered by proprietary symbolic context unique to each organization. The institutional memory of a company becomes machine-readable. That changes the economics of expertise.
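The retrieval idea behind those context pipelines fits in a few lines. This is a toy sketch, not any particular vector database: the documents and embedding vectors below are invented, and in a real system the vectors would come from an embedding model rather than being written by hand.

```python
import math

# Invented internal documents with made-up 3-dimensional "embeddings".
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "escalation playbook": [0.1, 0.8, 0.3],
    "pricing history": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, k=1):
    """Return the k internal documents most similar to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(docs[d], query_vec), reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # → ['refund policy']
```

The point of the sketch: the moat is not the similarity math, which is trivial, but the proprietary documents sitting in `docs`.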

Layer 3: Digital AI Models

The model layer is where symbolic understanding becomes usable intelligence.

Large language models fundamentally changed software because they introduced probabilistic reasoning into mainstream computing. Traditional software systems required deterministic instructions.
AI models infer. That sounds subtle. It is not.

It changes the philosophy of software entirely.

Instead of explicitly programming every workflow, humans increasingly specify intent while models generate outputs dynamically. This is why concepts like chain-of-thought reasoning, multimodal understanding, autoregressive generation, and state-space modeling matter. The software industry is shifting from handcrafted logic toward statistical cognition.
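The shift from explicit instructions to specified intent can be shown side by side. A minimal sketch: `route_ticket_rules` is ordinary hand-written logic, while the `llm` parameter in `route_ticket_model` is a hypothetical stand-in for any model call, not a real API.

```python
# Deterministic software: every branch is written by hand.
def route_ticket_rules(text: str) -> str:
    if "refund" in text.lower():
        return "billing"
    if "crash" in text.lower():
        return "engineering"
    return "general"

# Probabilistic software: the human specifies intent, the model infers the route.
# `llm` is a placeholder for any model-calling function (hypothetical interface).
def route_ticket_model(text: str, llm) -> str:
    return llm(f"Route this support ticket to billing, engineering, or general: {text}")

print(route_ticket_rules("The app crashes on login"))  # → engineering
```

The first function encodes the workflow; the second encodes only the intent and delegates the logic to statistical cognition.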

But the important observation is this:

Models alone are rapidly commoditizing. The frontier models remain expensive and strategically important, but raw intelligence increasingly behaves like infrastructure. This is similar to what happened with cloud computing.

At first, infrastructure itself captured extraordinary value.
Over time, differentiation migrated upward into orchestration, workflow design, distribution, and vertical integration. The same thing is happening to models. OpenAI and Anthropic sit at this layer, with xAI and Meta to a lesser extent.

Which leads to the next layer.

Layer 4: Agent Systems

This is where the stack becomes operational.

Models generate intelligence. Agents translate intelligence into action.

That distinction matters enormously.

A chatbot answering questions is not transformative. An agent autonomously orchestrating software systems is.

Agents introduce:

Tool usage. Workflow execution. Memory. API interaction. Reasoning loops. Task decomposition. Context retrieval. Software emulation.
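Those ingredients combine into one control structure. The sketch below is a minimal agent loop under heavy assumptions: `fake_model` is a hard-coded planner standing in for an LLM, and the tool name and return format are invented for illustration.

```python
# Hard-coded "planner" standing in for a model: decide the next action
# from the goal and what the agent already remembers.
def fake_model(goal, memory):
    if "invoice" not in memory:
        return ("fetch_invoice", goal)          # tool usage
    return ("done", memory["invoice"])          # task complete

# Tool registry: in a real system these would hit APIs or databases.
TOOLS = {
    "fetch_invoice": lambda customer: {"invoice": f"INV-001 for {customer}"},
}

def run_agent(goal, max_steps=5):
    memory = {}                                  # memory / context
    for _ in range(max_steps):                   # the reasoning loop
        action, arg = fake_model(goal, memory)
        if action == "done":
            return arg
        memory.update(TOOLS[action](arg))        # tool output feeds memory
    raise RuntimeError("step budget exhausted")

print(run_agent("Acme Corp"))  # → INV-001 for Acme Corp
```

Swap the hard-coded planner for a model and the lambda for real APIs, and the same loop becomes operational software rather than a chatbot.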

This is the layer where AI stops being conversational and starts becoming operational. Most people underestimate how large this market could become. Because the economic value is not in generating text.

The value is in reducing human coordination overhead.

Most enterprise inefficiency is not caused by lack of intelligence.
It is caused by fragmented systems, manual orchestration, context switching, poor prioritization, and operational latency. Agents attack those problems directly. This is why companies are racing toward AI-native workflow systems.

The future enterprise software stack will increasingly consist of:

Systems of record. Systems of intelligence. Systems of execution.

And the agent layer becomes the orchestration fabric connecting all three. That is why every major software category is being rebuilt around agents:

Sales. Legal. Healthcare. Security. Finance. Engineering. Research. Customer support. Operations.

The software interface itself begins disappearing. The workflow becomes the product.

Layer 5: Digital AI Applications

At the top of the stack sit the visible applications.

This is the layer consumers interact with directly.

Coding copilots. AI legal assistants. Research systems. Creative generation tools. Scientific discovery engines. Enterprise copilots. Autonomous analysts.

But the application layer is deceptive. Because most AI applications are not truly standalone products. They are abstractions sitting on top of every layer beneath them. A modern AI application increasingly depends on:

Compute infrastructure. Memory systems. Foundation models. Context orchestration. Agent frameworks. Enterprise integrations. Feedback systems. Safety layers.

This is why the strongest AI companies are increasingly becoming vertically integrated.

Owning only the application layer becomes dangerous if someone else controls the models. Owning only the models becomes dangerous if someone else owns distribution. Owning only the infrastructure becomes dangerous if higher-level orchestration captures most of the economic value.

Every layer is attempting to move both upward and downward in the stack. That is the real AI war.

The Bigger Shift

The most important takeaway from this stack is that AI is not simply another software cycle.

It is the industrialization of cognition.

Previous software revolutions primarily automated storage, communication, and workflow digitization.

AI automates parts of reasoning itself. That changes the economics of labor. It changes organizational design. It changes how products are built. It changes how companies scale. It changes what knowledge work even means.

And unlike previous technology waves, every layer of this stack reinforces the others.

Better chips enable larger models. Better models enable more capable agents. Better agents enable richer applications. Better applications generate more data. More data improves symbolic understanding.

The system compounds. That is why this moment feels different. We are not merely watching a software boom. We are watching the construction of a new digital industrial stack for intelligence itself.

Why SAP and Salesforce won’t die because of AI

AI and LLMs won’t kill Salesforce or most other SaaS.

First: Salesforce was never “just” the UI. The UI was the enforcement mechanism for structured data capture. That is different. The real moat was always organizational standardization. Salesforce won because entire GTM organizations reorganized themselves around its ontology: Accounts, Opportunities, Stages, Forecasts, Territories, Approvals. The UI enforced compliance with that ontology. Agents weaken the interface advantage, but they do not automatically weaken the organizational ontology advantage. That distinction matters.

Second: “headless” is being overstated across the industry. Most enterprise software already exposed APIs years ago. What is changing is not technical architecture. What is changing is who the primary consumer of the software is. Historically: humans. Increasingly: agents. That is a distribution and interaction shift more than a pure infrastructure shift. Salesforce is effectively repositioning itself from “application employees log into” toward “operational substrate agents execute against.” That is a very different narrative than “Salesforce became headless.”

Third: the deepest moat may not actually be the database layer. It may be the policy layer.

That is the missing insight in most “Postgres + APIs replaces SaaS” arguments.

A surprising amount of SaaS value is not CRUD operations. It is encoded institutional policy:

  • who can approve what
  • escalation trees
  • exception handling
  • rollback rules
  • auditability
  • compliance workflows
  • reconciliation logic
  • permissions
  • temporal sequencing
  • coordination between departments

The database is easy. The operational state machine is not.
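To make the state-machine point concrete, here is a toy approval policy. Every role, threshold, and rule below is invented for illustration; real systems of record encode thousands of these, accumulated over years of edge cases.

```python
# Invented roles and discount limits; real policy layers have far more of these.
APPROVAL_LIMITS = {"rep": 1_000, "manager": 10_000, "vp": 100_000}

def approve_discount(amount, approver_role, deal_stage):
    # Temporal sequencing: discounts only exist after negotiation starts.
    if deal_stage not in ("negotiation", "contract"):
        raise ValueError("discount requested before negotiation stage")
    # Permissions: the approver's role must cover the amount.
    if amount <= APPROVAL_LIMITS.get(approver_role, 0):
        return {"approved": True, "escalate_to": None}
    # Escalation tree: hand the request upward instead of failing silently.
    next_up = {"rep": "manager", "manager": "vp"}.get(approver_role)
    return {"approved": False, "escalate_to": next_up}

print(approve_discount(5_000, "manager", "negotiation"))
# → {'approved': True, 'escalate_to': None}
```

Recreating the CRUD layer around this is trivial; recreating fifteen years of rules like these, and the exceptions to them, is the hard part.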

AI lowers the cost of recreating the first 80% of a system of record. The remaining 20% is the moat.

The first 80% is schema generation and workflow reconstruction. The final 20% is operational entropy accumulated over 15 years of edge cases.

That is where incumbents still retain leverage.

The agentic world may compress the distinction between system of record and system of execution.

Historically:

  • CRM stored intent
  • ERP stored transactions
  • Ticketing stored requests
  • Humans executed work

In the agentic stack:

  • the system stores context
  • agents execute
  • outcomes become new training/context
  • execution exhaust becomes proprietary data

That closed-loop matters enormously.

The future moat is not:
“Who stores the record?”

The future moat is:
“Who observes the action loop?”

That changes everything.

A vertical AI-native field-service platform that:

  • dispatches technicians
  • observes outcomes
  • tracks timing
  • handles exceptions
  • coordinates payments
  • monitors resolution quality
  • captures edge-case execution traces

…is building a fundamentally stronger moat than a passive CRM.

Because it generates new proprietary operational data every day.

That is the key transition:
Static data → Dynamic execution exhaust.

Another strong insight: network effects have historically been weak in systems of record.

Most enterprise SaaS never achieved true network effects. It achieved switching-cost lock-in masquerading as network effects.

Salesforce did not become more valuable because more companies used Salesforce.
It became harder to leave because:

  • integrations accumulated
  • process accumulated
  • training accumulated
  • reporting accumulated
  • management rituals accumulated

That is not a network effect.
That is organizational sediment.

Agentic systems may genuinely create network effects for the first time because agents transact across organizational boundaries.

That is a major conceptual shift.

For example:

  • procurement agents
  • logistics agents
  • insurer/provider agents
  • auditor/accounting agents
  • supplier/manufacturer agents

Once multiple counterparties coordinate through the same execution rails, the software stops being a database and starts becoming market infrastructure.

That is closer to Stripe, Visa, Flexport, DoorDash, or even Bloomberg than traditional SaaS.

The DoorDash comparison is more profound than it initially appears.

DoorDash’s moat is not “restaurant data.”
It is coordinated operational execution:

  • drivers
  • routing
  • logistics
  • dispatch
  • fulfillment timing
  • marketplace balancing
  • payments
  • exception recovery

The future durable enterprise platforms likely look more like operational coordination networks than dashboards.

Another thing:

The SaaS seat-license model structurally weakens in an agentic world.

If:

  • one agent replaces 20 UI users
  • workflows become automated
  • human interaction frequency declines

…then charging “per employee seat” becomes increasingly artificial.

This is existential for large enterprise SaaS vendors.

The pricing model likely shifts toward:

  • workflow volume
  • execution outcomes
  • API consumption
  • agent transactions
  • orchestration complexity
  • governance/compliance layers
  • coordination rails

That transition is potentially more dangerous to incumbents than the technical transition itself.

Because their valuation multiples were built on predictable seat expansion economics.

The most important sentence in the entire piece may actually be this:

“The data is the context now.”

Yes. But even that understates it.

In agentic systems:
context is memory,
memory drives action,
action generates exhaust,
exhaust becomes proprietary context.

This creates recursive compounding loops.

The strongest AI-native systems will not merely store records.
They will:

  • observe workflows
  • model workflows
  • predict workflows
  • execute workflows
  • optimize workflows
  • benchmark workflows

That is qualitatively different from SaaS.

One final pushback.

You slightly overestimate the speed at which enterprises will abandon incumbents.

Theoretically:
“Postgres + APIs + agents” sounds compelling.

Operationally:
most enterprises cannot even maintain clean Salesforce objects consistently.

The average enterprise is nowhere near capable of safely operating:

  • custom ontologies
  • agent orchestration
  • policy engines
  • exception handling systems
  • governance layers
  • evaluation pipelines
  • permission frameworks
  • audit systems

Especially outside elite tech companies.

So the near-term likely looks less like:
“Incumbents disappear.”

And more like:
“Incumbents become increasingly infrastructural.”

Salesforce may become:

  • the policy layer
  • the permission layer
  • the audit layer
  • the canonical customer graph

…while newer AI-native systems increasingly own execution and workflow intelligence above it.

Which means the interesting startups may not fully replace incumbents initially.

They may parasitize them first.

That is historically how platform transitions usually happen.

Overall, the core thesis is strong:
defensibility moves away from UI habit and toward:

  • execution loops
  • operational context
  • policy orchestration
  • proprietary exhaust
  • multi-party coordination
  • real-world execution
  • trust infrastructure

That is the real shift.

And the most important hidden implication is this:

The winners of the agentic era may look less like SaaS companies and more like operational networks.  

AI Is Killing Product Moats. The New Moat Is Organizational Design

What AI is actually doing to companies beneath the product layer is not obvious to most entrepreneurs. It is not really about recruiting. It is about institutional design becoming the new competitive advantage once software itself becomes fluid.

The core argument is simple:

When models, workflows, interfaces, and even product categories converge fast enough, the durable moat shifts away from “what you build” toward “how your organization compounds judgment.”

In other words:

AI compresses product differentiation.

So organizational differentiation becomes strategic infrastructure.

The second important observation most people miss:

Great companies are not just talent aggregators.

They are identity engines.

Organizational Design

Think of Amazon’s Leadership Principles, or the Netflix culture deck.

The company shape determines:
who gets power,
what behavior is high status,
what sacrifice means,
what ambition gets rewarded,
and ultimately what kind of human being can exist there.

That is a much more sophisticated framing than the usual “mission-driven culture” nonsense.

  1. AI is commoditizing product narratives faster than products themselves.

Everyone now sounds identical:
“system of action”
“AI-native workflow layer”
“context graph”
“organizational memory”
“agentic infrastructure”

The language converges before the products converge. Which means narrative inflation is now instantaneous.

  2. The new moat is institutional compression resistance.

The companies that survive AI will not necessarily have the best models.

They will have the hardest-to-replicate organizational geometry:
decision velocity,
talent density,
status systems,
deployment loops,
and concentrated judgment.

  3. Most companies accidentally optimize for emotional extraction.

They make people feel:
special,
chosen,
important,
close to power.

But structurally:
they centralize authority,
delay ownership,
gate economics,
and defer recognition indefinitely.

That asymmetry is probably the defining labor tension of AI-era startups.

  4. AI companies are increasingly religions disguised as corporations.

Not metaphorically.
Structurally.

The strongest AI institutions now compete using:
destiny,
civilizational stakes,
tribal identity,
moral positioning,
and historical proximity.

The recruiting pitch increasingly resembles ideological alignment rather than employment.

  5. The collapse of category boundaries means “org design” becomes product design.

Palantir’s forward deployment model was not HR policy.
It was the product architecture expressed through people.

OpenAI’s structure is not separate from its models.
The institution itself is part of the product.

That distinction matters enormously.

The biggest “aha” people may miss:

Most founders still think organizational design is downstream of the company succeeding.

In AI, it is upstream.

Because the product itself changes too quickly.

Your structure determines:
who joins,
who stays,
who gets authority,
how fast decisions travel,
whether reality reaches leadership,
whether customer pain is respected,
whether exceptional people compound each other or suffocate each other.

The organization is no longer the wrapper around the moat.

The organization is the moat.

LinkedIn post:

Every AI company now sounds the same.

“Agentic workflows.”
“System of action.”
“Context graph.”
“Organizational memory.”
“AI transformation platform.”

A new category gets invented on Monday.
By Friday, 400 startups have rewritten their homepage around it.

That is what happens when product velocity becomes cheap.

Models improve fast.
Interfaces converge.
Features get copied in weeks.
Entire categories collapse into each other.

The visible layer of company-building is becoming commoditized.

Which means the real moat is moving somewhere else.

Into the institution itself.

The companies that survive this era will not just have better products.

They will have better organizational geometry.

How decisions move.
Who gets authority.
What behavior is high status.
How customer reality reaches product teams.
How exceptional people compound each other instead of drowning in process.

Palantir understood this early.

Forward deployment was not just a GTM motion.
It was an organizational invention.

OpenAI did not just build models.
It built a new institutional structure around frontier research.

The shape of the company became part of the advantage.

Most founders still think org design is something you clean up after success.

In AI, it is becoming the thing that determines whether success compounds at all.

Because products are getting easier to copy.

But concentrated judgment,
talent density,
mission alignment,
and institutional trust
are still brutally hard to build.

The organization is no longer the wrapper around the moat.

The organization is the moat.

What AI is doing is exposing where companies were already structurally broken.

Seeing today’s latest round of “AI layoffs” discourse reminded me of something uncomfortable that I think a lot of people in tech already know, but are hesitant to say out loud.

These layoffs are not really about AI replacing workers directly.

But they are still because of AI.

That distinction matters.

A lot of ink has been spilled debating whether the current layoff wave is genuinely driven by AI productivity, or whether companies are simply “AI-washing” ordinary cost cutting. You can find essays arguing both sides. One side claims AI is fundamentally transforming software development and knowledge work.

The other points out that if AI is truly delivering 5x productivity gains, why do products look mostly the same? Why are revenues not exploding upward? Why do organizations still move slowly?

Both sides are partially right.

Because the real story is not that AI suddenly replaced 30% of employees overnight. It clearly has not. Most companies are nowhere close to operating autonomously through AI agents. Most workflows are still deeply human, deeply organizational, and deeply messy.

But something equally important did happen. AI changed the economics and speed of execution faster than organizations could adapt to it.

And that mismatch is now destabilizing companies from the inside.

The easiest way to understand this is through a boring but useful framework every management consultant eventually rediscovers and puts into a PowerPoint slide:

Input. Output. Outcome.

Code is input. Features are output. Revenue, retention, usage, and customer satisfaction are outcomes.

For years, engineering organizations operated under a very important constraint: writing software was expensive and slow.

The CEO had 150 ideas. Product wanted to test all of them. Engineering said: “We only have bandwidth for one.”

That constraint was frustrating, but healthy. Because it forced prioritization. It forced argument. It forced teams to kill bad ideas before they consumed too many resources.

Then generative AI arrived.

Now suddenly:

  • MVPs appear in days.
  • Internal tools get built overnight.
  • Pull requests explode.
  • Individual engineers generate dramatically more code.
  • Teams can prototype five directions simultaneously instead of debating one.

At first this feels magical. And in many ways, it is. But then something strange happens. The amount of code increases dramatically. The amount of actual business outcome often does not.

And this is where both AI evangelists and AI skeptics start talking past each other.

The evangelists point at the explosion in output. They are not hallucinating that. It is real. Anyone working inside a modern tech company can see it. AI usage is everywhere now. Even conservative companies that try to restrict AI adoption still have employees quietly using ChatGPT, Gemini, Claude, Cursor, Copilot, or some internal equivalent.

Meanwhile, forward-leaning companies are consuming AI tokens at astonishing rates. Entire engineering teams now operate with AI copilots open all day. Code generation volume has exploded. Internal tooling velocity has exploded. Experiments that once took weeks now take hours.

The skeptics then ask the obvious question:

“If all this productivity is real, where are the corresponding outcomes?”

  • Why are the products not radically better?
  • Why are revenues not scaling proportionally?
  • Why do users barely notice the difference?

That is the correct question.

Because AI dramatically accelerated inputs without automatically improving judgment.

And once that happened, organizations discovered something uncomfortable:

Coding was never the only bottleneck. In many companies, it was not even the primary bottleneck anymore. Alignment was. Decision-making was. Prioritization was. Organizational coherence was. The ability to distinguish two good ideas from eight bad ones was.

For years, engineering scarcity hid these problems.

When code was expensive, organizations were forced to compress decision-making. They had to debate priorities carefully because implementation carried real cost. Bad ideas died early because nobody wanted to waste six months building them.

Now implementation is cheap.

So the filtering mechanisms weaken.

Instead of debating whether something should exist, teams often just build it because they can.

And this creates a second-order organizational problem that I think many executives are only now beginning to realize.

AI does not just accelerate execution.

It accelerates divergence.

Two teams receive loosely aligned objectives and independently build different solutions overnight based on different assumptions. Product alignment that once happened before implementation now happens after implementation, when conflicting prototypes already exist.

Except nobody really wants to slow down and align properly anymore.

Because once people get used to infinite AI-assisted execution capacity, every disagreement feels solvable through “just building another version.”

So instead of reducing organizational chaos, AI often amplifies it.

Everyone becomes locally productive while the organization becomes globally incoherent.

This is the part most discussions about AI productivity completely miss.

The bottleneck shifted.

We thought software engineering was the limiting factor.

Turns out organizational coordination was.

And when the original bottleneck disappears overnight, all the hidden inefficiencies downstream become painfully visible.

You suddenly notice:

  • overlapping roadmaps,
  • duplicate tooling,
  • stakeholder conflicts,
  • management layers,
  • endless meetings,
  • redundant approvals,
  • teams blocking teams,
  • political coordination disguised as process.

The faster coding becomes, the more expensive organizational friction becomes.

That is one side of the story.

The other side is simpler.

AI is expensive.

Not philosophically expensive. Literally expensive.

If your engineers are consuming tens of thousands of dollars per year in AI tokens, inference, compute, and tooling costs, that spend has to come from somewhere.

And right now, most companies are not seeing proportional revenue expansion from that spend.

That matters.

Because businesses ultimately run on unit economics, not technological excitement.

If your input costs rise 30%, but your outcomes barely move, something must rebalance.

Companies can tolerate this temporarily while chasing strategic advantage. But eventually the finance organization starts asking obvious questions:

  • Why is infrastructure spend exploding?
  • Why is AI tooling spend exploding?
  • Why are engineering costs rising?
  • Why are we generating dramatically more output without equivalent business results?

And once those questions start getting asked seriously, layoffs become mathematically predictable.

Not because AI replaced the employee one-for-one. But because AI changed the company’s cost structure before it changed the company’s outcomes.
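The cost-structure shift is easy to see in a back-of-envelope calculation. Every number below is hypothetical; only the shape of the math matters.

```python
# Hypothetical annual figures for one engineering organization.
payroll = 10_000_000
ai_spend = payroll * 0.30        # AI tokens, inference, tooling: inputs up 30%
revenue_before = 25_000_000
revenue_after = 25_500_000       # outcomes barely move

margin_before = revenue_before - payroll              # 15.0M
margin_after = revenue_after - (payroll + ai_spend)   # 12.5M

# The gap is what finance eventually asks payroll to absorb.
gap = margin_before - margin_after
print(gap / payroll)  # → 0.25, i.e. 25% of payroll
```

When the gap closes through headcount rather than outcomes, you get exactly the layoff pattern we are seeing.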

This is why calling these layoffs “AI-washing” is incomplete. Yes, many companies already had structural issues:

  • overhiring,
  • bloated middle management,
  • slowing growth,
  • declining margins,
  • weak product differentiation,
  • post-ZIRP expansion hangovers.

All true.

But AI still matters profoundly here because it changed executive expectations around what level of organizational efficiency should now be possible.

And there is another uncomfortable truth underneath all of this. Large organizations contain slack by design.

That redundancy is not accidental. It creates resilience. It allows institutional continuity. It allows people to leave, switch teams, take parental leave, or disappear without the company collapsing instantly.

But that same redundancy also means many large companies can remove 10–20% of staff and continue functioning in the short term with surprisingly little disruption.

Sometimes they even move faster temporarily.

Fewer stakeholders. Fewer alignment loops. Fewer competing priorities. Fewer organizational veto points.

Again, this does not mean layoffs are universally good or strategically wise long term. Many companies will absolutely cut too deep and damage themselves. Some are already doing so.

But from the perspective of executives staring at exploding AI budgets and organizational coordination problems, the logic behind these layoffs is not irrational.

They are trying to rebalance the organization around a new execution reality. Whether they actually know how to operate effectively in that new reality is a separate question.

And that, ultimately, is the real story.

The biggest misconception in the current AI debate is the assumption that AI productivity automatically translates into business productivity.

It does not. AI massively increases the capacity to generate inputs. But companies still need systems that convert inputs into outcomes.

They still need judgment. They still need prioritization. They still need coherent product strategy. They still need organizational alignment. They still need actual understanding of customer problems.

And until companies learn how to translate AI-generated abundance into real economic outcomes, we are going to continue seeing this strange phase where:

  • code generation explodes,
  • AI spending explodes,
  • organizations destabilize,
  • and payroll shrinks to offset the difference.

These layoffs are not happening because AI has fully replaced humans.

They are happening because AI changed the economics, expectations, and operating tempo of companies faster than companies learned how to adapt.

AI accelerated execution. Human organizations did not accelerate coordination at the same speed. And that gap is where the layoffs are coming from.

Why did OpenAI and Anthropic announce services investments to implement AI with PE funds?

The famous “$1 software, $6 services” idea has been floating around Silicon Valley for years. The premise is simple: for every dollar spent on software, companies spend roughly six dollars implementing, operating, customizing, integrating, managing, and extracting value from that software through services.

What is interesting now is that AI companies are all implicitly claiming they can capture the entire $7.

Not just the software dollar.
All of it.

The software.
The implementation.
The execution.
The operations.
The optimization.
The services layer.
Everything.

And honestly, I think a lot of founders believe this is inevitable because AI dramatically compresses labor costs. If a system can generate code, generate creative, generate campaigns, generate reports, generate analysis, then the assumption becomes: “obviously the software company now captures the services revenue too.”

But I think most people are massively underestimating how difficult that transition actually is.

Because replacing labor is not the same thing as owning outcomes.

Most agencies right now are making the exact same mistake. They are using AI primarily to reduce labor cost inside the existing agency model. Faster reports. Faster creative generation. Faster SEO content. Faster media analysis. Faster onboarding. Faster decks. Faster summaries.

That is not a new operating model. That is the old operating model with better tooling.

You still fundamentally have the same fragile structure underneath:
humans moving information between systems,
humans manually coordinating workflows,
humans stitching together context,
humans translating between strategy and execution,
humans constantly re-explaining the same things to different people.

The result is that many agencies are becoming “cheaper execution engines” instead of becoming true growth infrastructure companies.

And I think that is the wrong game entirely.

The real opportunity is not selling cheaper labor. The real opportunity is selling managed growth loops.

That distinction matters because agencies historically sold hours, then deliverables, then expertise. But none of those things are actually what the customer wanted. The customer wanted movement. Specifically, movement from uncertainty to revenue.

Nobody buys SEO because they emotionally crave backlinks.
Nobody buys paid media because they love campaign structures.
Nobody buys content because they admire blog formatting.

They buy these things because they are trying to move a buyer through a chain:
unaware → aware → interested → trusting → wanting → acting.

Every marketing function exists to move somebody forward in that chain. If it does not move somebody forward in that chain, it is probably just organizational theater disguised as marketing sophistication.

When I reduce marketing down to first principles, most of the work is actually just recombining a relatively stable set of raw materials:
customer pain,
customer language,
proof,
offers,
objections,
competitor claims,
search demand,
social demand,
conversion data,
sales feedback,
brand taste.

That is it.

Most campaigns, landing pages, ads, outbound sequences, webinars, sales decks, and SEO pages are simply different combinations of those inputs expressed through different channels.

The problem is not lack of intelligence. The problem is workflow fragmentation.

Traditional agencies route these inputs through an absurdly inefficient human maze. One person gathers data. Another writes a brief. Another reviews the brief. Another asks the client clarifying questions. Another creates variants. Another formats slides. Another builds reports. Another explains the reports in a meeting nobody wanted.

By the time the work actually ships, the market already moved.

This is why I increasingly think the future agency is fundamentally about loops, not services.

A loop is an end-to-end process that continuously converts inputs into outcomes while learning from feedback. The important part is not automation. The important part is the feedback system.

For example, take a paid creative loop.

Customer pain, proof points, offer positioning, and channel data become creative angles. Those angles become variants. The variants become ads. The ads generate performance data and sales feedback. The winners get scaled. The losers get killed. The learnings get stored. Then the next iteration improves.

That is a loop.
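That feedback structure can be sketched in a few lines of code. This is a toy illustration, not a real ad platform: the `Variant` and `CreativeLoop` names, the ROAS threshold, and the numbers are all my own assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class Variant:
    angle: str          # creative angle derived from pain, proof, and offer inputs
    spend: float = 0.0
    revenue: float = 0.0

    @property
    def roas(self) -> float:
        # Return on ad spend: the scale-or-kill signal for this variant.
        return self.revenue / self.spend if self.spend else 0.0


@dataclass
class CreativeLoop:
    variants: list = field(default_factory=list)
    learnings: list = field(default_factory=list)  # stored for the next iteration

    def run_iteration(self, performance: dict, scale_threshold: float = 2.0):
        """Ingest performance data, scale winners, kill losers, store learnings."""
        for v in self.variants:
            v.spend, v.revenue = performance.get(v.angle, (0.0, 0.0))
        winners = [v for v in self.variants if v.roas >= scale_threshold]
        losers = [v for v in self.variants if v.roas < scale_threshold]
        self.learnings += [f"scale: {v.angle} (ROAS {v.roas:.1f})" for v in winners]
        self.learnings += [f"kill: {v.angle} (ROAS {v.roas:.1f})" for v in losers]
        self.variants = winners  # the next iteration starts from what worked
        return winners, losers


# One iteration with invented numbers: (spend, revenue) per angle.
loop = CreativeLoop(variants=[Variant("price objection"), Variant("social proof")])
winners, losers = loop.run_iteration(
    {"price objection": (100.0, 350.0), "social proof": (100.0, 120.0)}
)
```

The point of the sketch is the last step inside `run_iteration`: the loop carries the winners and the stored learnings into the next cycle, which is what makes it compound rather than reset.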

An agent is not the loop.

An agent is simply a worker operating inside the loop.

This distinction is where I think a lot of AI companies are going to fail. Right now everybody is obsessed with building agents. Research agents. SDR agents. Creative agents. Analytics agents. Reporting agents. Coding agents.

But many companies are building giant collections of agents without building the actual operating system that coordinates them.

So you end up with:
many bots,
many automations,
many dashboards,
many copilots,
and still no meaningful compounding advantage.

That is AI theater.

The future agency probably has three layers.

At the top sits the managed loop. That is the actual business outcome layer.

Underneath are specialist agents responsible for execution tasks.

And overseeing everything sits human judgment.

I actually think humans become more valuable in this world, not less valuable. But their role changes significantly.

Humans should own:
taste,
strategy,
prioritization,
tradeoffs,
offer quality,
client trust,
narrative framing,
high-consequence decisions,
what to kill,
what to scale.

Humans should not spend their best cognitive hours resizing creative, formatting slides, manually checking broken links, rebuilding onboarding checklists, or writing generic first drafts for the 900th time.

That work should become infrastructure.

This is why I think codified workflows become incredibly important. At Single Grain and Single Brain, I increasingly think about this through SKILL.md files.

A skill file is basically codified expertise. It defines when a workflow should run, what inputs are required, what tools are involved, what good output looks like, what failure looks like, what requires human approval, and what gets saved back into memory.

Without codified skills, agents improvise.

With codified skills, systems begin repeating the best version of the workflow instead of reinventing it every time.
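As a rough sketch of what codifying that expertise might look like in structured form. The field names mirror the list above; the example skill and all of its values are hypothetical, not an actual Single Grain SKILL.md:

```python
from dataclasses import dataclass, field


@dataclass
class Skill:
    """Codified expertise: fields mirror what the text says a SKILL.md defines."""
    name: str
    trigger: str                  # when the workflow should run
    required_inputs: list         # what inputs are required
    tools: list                   # what tools are involved
    good_output: str              # what good output looks like
    failure_modes: list           # what failure looks like
    needs_human_approval: bool    # what requires human sign-off
    memory_keys: list = field(default_factory=list)  # what gets saved back into memory

    def can_run(self, available_inputs: set) -> bool:
        # A skill only fires when every required input is present,
        # so agents execute the codified workflow instead of improvising.
        return set(self.required_inputs) <= available_inputs


# Hypothetical example skill.
creative_brief = Skill(
    name="paid-creative-brief",
    trigger="new campaign request or weekly refresh",
    required_inputs=["customer_pain", "proof_points", "offer", "channel_data"],
    tools=["ad_library", "analytics_export"],
    good_output="3-5 distinct creative angles tied to a named customer pain",
    failure_modes=["generic angles", "claims without proof"],
    needs_human_approval=True,
    memory_keys=["winning_angles", "killed_angles"],
)
```

The design choice that matters is the `memory_keys` field: it is what turns a one-off workflow into infrastructure, because each run writes its results back for the next one.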

And that is where compounding starts happening.

Labor resets every month.
Infrastructure compounds.

I think the agencies that survive this transition will own seven major loops.

The raw materials loop continuously collects customer pain, objections, reviews, transcripts, competitor messaging, proof, search demand, and social signals so inputs never go stale.

The creative testing loop continuously generates, tests, kills, and scales creative at velocity, and learns from every cycle.

The demand capture loop watches search, AI recommendations, social discovery, and community conversations to determine where authority should be built.

The conversion loop identifies where momentum dies between impression and revenue.

The client narrative loop converts performance data into clear decisions instead of recurring meetings.

The sales enablement loop continuously feeds sales teams with better proof, better objection handling, and better follow-up assets.

And finally the learning loop captures every successful and failed pattern so the entire organization compounds instead of restarting from zero with every client.

That last loop is the moat.

If an agency serves 100 clients and still starts from scratch every time, that is not a learning organization. That is a staffing company with a better website.

The firms that win in the AI era will not simply “use AI.” Everybody will use AI.

The winners will build systems that learn faster than competitors.

That changes hiring too. The most valuable people in the next generation agency are probably not generic task executors. They are people who can direct systems, judge outputs, recognize weak signals, make decisions under ambiguity, improve workflows, and earn trust.

Some people become dramatically more valuable in this world.

Some roles become brutally exposed.

That is uncomfortable, but pretending otherwise does not change the economics.

The question I keep coming back to is simple:

If AI makes execution dramatically cheaper, what do clients still need agencies for?

I think the answer is:
ownership of the growth loop.

Not isolated tasks.
Not disconnected deliverables.
Not prettier reports.

They need someone to continuously turn messy market signals into reliable customer acquisition while the system keeps getting smarter over time.

That is the business model I think eventually captures the full $7.

Job losses due to AI are overblown

I am an AI maximalist. From day one, I have believed AI will eventually create more jobs, and I have been consistent on this. Why?

AI collapses the marginal cost of cognition.

When the marginal cost of a core production input collapses, systems reorganize around abundance, not scarcity.

That is what happened with:
energy, transportation, computation, communication, and storage.

Now it is happening to reasoning itself.

The mistake most AI doomers make is treating “today’s jobs” as the unit of analysis.

Historically, that has never been the correct frame. The economy does not preserve jobs. It preserves demand satisfaction.

The tractor did not “save farming jobs.”
Excel did not “protect bookkeepers.”
AWS did not “protect sysadmins.”

Instead: costs collapsed, throughput exploded, adjacent industries emerged, organizational complexity increased, new coordination problems appeared, new abstractions became valuable.

The important historical pattern is not merely “technology creates jobs.” Sometimes it absolutely destroys categories permanently. The important pattern is that productivity increases create systemic expansion.

Labor absorption, however, does not happen smoothly. History shows it absolutely does not.

The transition periods are brutal.

Agricultural mechanization was not painless. Industrialization was not painless. Globalization was not painless. Software automation was not painless.

AI expands the frontier of economically viable ambition.

Humans are not utility-maximizing relaxation machines.

We are stupid. We find meaning in work.

The modern middle class consumes luxuries kings literally could not buy:

  • instant communication,
  • air travel,
  • climate control,
  • infinite entertainment,
  • personalized medicine,
  • software infrastructure,
  • global commerce.

As cognition becomes cheaper, expectations will inflate again.

The future employee may manage:

  • 20 AI agents,
  • synthetic workflows,
  • autonomous pipelines,
  • real-time market simulations,
  • personalized customer systems.

That is still work. Just different work.

The Intelligence Revolution Will Be Won by Whoever Owns Context

Everyone thinks the AI race is about models.

It is not.

Models are already becoming interchangeable. OpenAI, Anthropic, Google, DeepSeek, Meta, Mistral — the gap narrows every quarter. Intelligence itself is rapidly commoditizing.

The real battle is moving one layer higher.

We are not living through one AI revolution. We are living through four overlapping micro-revolutions:

  • Chat.
  • Agents.
  • Context.
  • Platform.

And the company that wins the Context Revolution will likely become the most valuable company in history. Not because it builds the smartest model. Because it becomes the operating system for digital intelligence itself.

The mistake people make during every technological revolution is assuming the first winners remain the final winners. History says otherwise. The companies that dominate one phase of a revolution are often structurally incapable of dominating the next.

  • IBM did not win the PC revolution.
  • Yahoo did not win search.
  • BlackBerry did not win mobile.
  • MySpace did not win social.
  • VMware did not win cloud.

The current AI cycle will follow the same pattern.

Every Revolution Follows the Same Structure

Technology revolutions always feel chaotic from the inside. But viewed historically, they are surprisingly predictable.

First, fringe technologists experiment for years while nobody pays attention. Then a breakthrough suddenly reduces the cost of creating something valuable. Entrepreneurs rush in. Capital floods the market. Startups multiply. Hype explodes.

Then comes saturation.

The weak die. The winners consolidate. Infrastructure forms. Platforms emerge. The Industrial Revolution followed this pattern. So did the internet, cloud computing, mobile, crypto, and social media.

AI is no different.

What makes this cycle unusual is the scale of what is being automated. Previous revolutions automated transportation, manufacturing, publishing, communication, or commerce. This one automates intelligence itself. That changes everything.

Revolution 1: Chat

The first phase of AI was conversational intelligence.

ChatGPT was the iPhone moment. For the first time, hundreds of millions of people could directly interact with a machine that felt intelligent. That alone created massive companies.

Entire industries emerged around AI-generated text, therapy, tutoring, coding assistance, roleplay, search augmentation, and content production.

This phase belonged overwhelmingly to OpenAI. They won distribution.

ChatGPT became the default interface for consumer AI in the same way Google became the default interface for the web. But chat alone was never the final form. Conversation is useful. Action is more valuable. Which led directly to the second revolution.

Revolution 2: Agents

Once models became reliable enough to produce structured outputs and invoke tools, AI stopped being conversational software. It became executable software.

Agents could suddenly interact with APIs, write code, search databases, send emails, book meetings, analyze files, and manipulate software systems.

This unlocked the agentic explosion.

  • AI SDRs.
  • AI customer support.
  • AI coding agents.
  • AI researchers.
  • AI workflow automation.
  • AI employees.

The entire market shifted from “AI that talks” to “AI that does.” This is where Anthropic built enormous momentum. Claude Code and agent-native workflows pushed the industry toward persistent execution rather than isolated prompts. But we are now clearly in frenzy territory.

Half of YC is building agent startups. Every SaaS product suddenly claims to have “agents.” Launch videos look like crypto commercials from 2021. People are spending more time branding agents than building moats.

That usually means a layer is approaching commoditization. And agents have a serious problem. They are only as good as the context they operate inside.

Revolution 3: Context, the Real Bottleneck

Most AI systems today are stateless. They forget everything. Every prompt starts from near-zero.

Even sophisticated agents still operate with fragmented memory, incomplete understanding, weak personalization, and shallow continuity.

That is the next great infrastructure problem in AI. Context. Not context windows. Context itself. Persistent organizational memory.

The living graph of people, meetings, documents, workflows, decisions, preferences, relationships, histories, and intent.

The companies solving this are not building “better prompts.” They are trying to build intelligence substrates. A durable memory layer for humans and organizations.

This is where the real switching costs emerge. Models are replaceable. Context is not.

A company can switch from GPT to Claude.
It is much harder to switch away from years of accumulated organizational memory, workflows, embeddings, permissions, histories, and relationships.

That is why the Context Revolution matters more than the Chat or Agent revolutions.

Chat created users. Agents created utility. Context creates lock-in.

And whoever owns context becomes the default environment where intelligence operates.

Why Incumbents May Lose

One of the strangest things happening right now is how absent the major model labs appear in the context discussion.

OpenAI and Anthropic pioneered the Chat and Agent revolutions.

But much of the experimentation around context graphs, memory systems, second brains, organizational knowledge layers, and persistent AI identity is happening in open source and startups.

That is historically consistent. Incumbents are often trapped by the architecture and incentives of the previous revolution.

  • Microsoft missed mobile.
  • Google struggled with social.
  • Meta struggled with search.
  • Amazon struggled with consumer social products.

Winning one layer often blinds companies to the next layer. Especially when the next layer threatens their current product structure. The Context Revolution requires fundamentally different thinking.

Not “better models.” Better continuity. Better memory architectures. Better identity systems. Better organizational knowledge graphs.

The winners here may look nothing like today’s AI leaders.

Revolution 4: Platforms

Every major technology wave eventually consolidates into a platform.

  • The PC revolution produced Windows.
  • Mobile produced the App Store.
  • Social produced creator platforms.
  • E-commerce produced Amazon marketplaces.

AI will do the same. But early AI platform attempts failed because they lacked the one thing platforms require: differentiated supply. Custom GPTs were not enough. Agent marketplaces were not enough.

Most agents today are disposable because they lack persistent context. But once context becomes standardized and deeply integrated, something changes. Users stop interacting with isolated apps.

Instead, they interact with intelligence built directly on top of their memory layer.

Their meetings. Their documents. Their workflows. Their relationships. Their company history. Their personal behavior.

That creates dramatically higher quality agents and generative applications. And whoever owns that substrate gains an extraordinary advantage. Because they automatically become the best place to build AI products. Then naturally, they become the best place to distribute them. Then they become the best place to monetize them.

That is how platforms form.

Why This Winner Could Become the Largest Company Ever

The final implication is the one most people still underestimate.

The winning AI platform may not simply become another software company. It may become an index on digital labor itself. Previous technology giants indexed industries.

  • Apple indexed mobile software.
  • Amazon indexed e-commerce.
  • Google indexed information retrieval.

But an AI platform built on persistent context and agentic execution indexes something much larger:

Human work. Agents replace increasing amounts of digital labor. Generative systems replace increasing amounts of software. As robotics matures, the physical economy increasingly becomes programmable too.

That means the ultimate AI platform is not indexing one market. It is indexing every market touched by intelligence. Which is nearly all of them. That is why this cycle is different. This is not another SaaS wave.

It is not another cloud cycle. It is an Intelligence Revolution. And the most important company of the next decade may not be the one with the best model. It may be the one that remembers you best.  

How Vibe Coding Changes the Quality of Output for Non-Developers

The core argument is straightforward but consequential: “vibe coding” collapses the distance between idea and software, transforming code from a specialized production activity into a personal, iterative medium. The implication is not just faster development. It is a structural shift in where value accrues. When anyone with taste and intent can produce working software on demand, the scarcity moves away from coding skill and toward vision, distribution, and control of interfaces.  

Vibe coding personal app store

The first inflection point is technical but foundational. Coding agents are no longer passive assistants generating fragments of code; they are active operators embedded in the execution environment. They read and write files, run shell commands, orchestrate processes, debug failures, and iterate toward completion. This changes the unit of work from “writing code” to “producing outcomes.” The user no longer stitches together tools across GitHub, cloud services, and deployment layers. The agent absorbs that integration complexity. The practical result is a collapse in activation energy. What once required setup, expertise, and patience can now begin with a prompt.

The second inflection point is conceptual. Software shifts from being built for markets to being built for individuals. The “personal app store” is less about distribution and more about orientation. Software becomes disposable, replaceable, and highly specific to the user’s needs. A workout tracker is not a product category; it is a custom artifact tailored to one person’s habits, preferences, and data. This reframing weakens the traditional advantage of generalized apps, especially in long-tail use cases where customization matters more than polish.

The third inflection point is organizational. Historically, software was mediated by teams, which introduced friction but also discipline. Vision had to be negotiated, explained, and constrained by engineering realities. With coding agents, that mediation layer disappears. A single operator can iterate rapidly without needing to justify every change. This produces sharper alignment between intent and output, but it also removes the implicit safeguards that come from collaboration. The product becomes a direct extension of the creator’s taste, for better or worse.

The fourth inflection point extends beyond software into platform dynamics. If users increasingly interact with agents rather than applications—issuing commands instead of navigating interfaces—the locus of control shifts. The operating system and app store become less central as the primary interface layer moves to the agent. In that scenario, Apple’s advantage in curated apps and polished interfaces diminishes. What remains is hardware differentiation and ecosystem integration, both of which historically command lower margins than software-driven monopolies.

These shifts are not free. The speed of vibe coding comes at the expense of robustness. While agents can produce functional software quickly, they struggle with architectural integrity at scale. As codebases grow, context limits force approximations. Models lose track of dependencies, apply superficial fixes, and occasionally resolve bugs by removing functionality altogether. The human operator must step in as an architect, guiding structure and enforcing coherence. In effect, the bottleneck moves from writing code to maintaining system integrity.

There is also a trade-off between creative purity and collective intelligence. Removing team friction enables rapid iteration and uncompromised vision, but it eliminates dissent and critique. In traditional development, disagreements often surface flaws early. In a solo, agent-assisted workflow, those checks are absent. The system optimizes for alignment with the user’s intent, not for correctness or optimality.

Another constraint lies in distribution. Personal software is powerful precisely because it is private and unconstrained, but it does not easily scale. Platform gatekeepers still control access to mass audiences. While agents can generate apps, they cannot yet bypass the economic and regulatory structures that govern distribution, payments, and trust. The personal app store remains a compelling concept, but it is not a replacement for public ecosystems.

The underlying flywheel is clear. As agents reduce the cost of creation, more individuals experiment with building software. Increased usage generates more data, feedback, and edge cases, which improve the models. Improved models expand the range of solvable problems, attracting more builders. This cycle compounds, enabling progressively more ambitious applications with smaller teams. Over time, entire categories of software development—prototyping, debugging, even customer support—become automated loops managed by agents and overseen by humans.

The most significant blind spot in this narrative is the assumption that interface abstraction leads directly to platform displacement. Even if agents become the primary interaction layer, they still depend on underlying systems for identity, security, payments, storage, and device access. These are not trivial layers; they are where trust and economic control reside. Apple’s moat is not limited to user interfaces. It includes a tightly integrated stack that governs how software interacts with users and with other systems. Agents may bypass the front-end experience, but they cannot easily replace the infrastructure of trust that underpins it.

In that sense, vibe coding represents both a democratization of creation and a redistribution of power. It lowers the barrier to entry dramatically, enabling individuals to build and iterate at unprecedented speed. But it does not eliminate the need for governance, architecture, or distribution. It simply moves those challenges to a different layer, where the stakes—and the competitive dynamics—may be even higher.

Where Is the Money in AI? The Real Economics of the “AI Supercycle”

The AI “Supercycle”

AI Supercycle refers to a multi-decade technological and economic cycle in which artificial intelligence drives sustained, large-scale investment, innovation, and value creation across the entire technology stack—spanning energy, semiconductors (chips), infrastructure (cloud and data centers), foundation models, and applications.

It is characterized by massive capital expenditure (capex) in compute and infrastructure, rapid advancements in model capabilities, and the gradual shift of economic value from foundational layers (e.g., GPUs and cloud) toward higher-margin application and software layers—similar to prior supercycles like the internet, mobile, and cloud.

At its core, the AI supercycle represents a structural transformation of how software is built, distributed, and monetized, where intelligence becomes a programmable, scalable resource embedded across industries, workflows, and consumer experiences.

The biggest question in artificial intelligence right now is not whether AI is important. That part is obvious. The real question is much harder: where is the money in AI actually accruing?

AI is in a full-stack economic supercycle — one that touches semiconductors, data centers, cloud infrastructure, foundation models, inference, applications, agents, consumer software, enterprise software, and energy.

The central observation is simple: the AI ecosystem does not yet look like prior technology supercycles.

In the internet, mobile, and cloud eras, the long-term value eventually moved upward into software and applications. Software businesses became extraordinarily valuable because they had near-zero marginal costs. Build once, distribute globally, enjoy 80% to 90% gross margins. That was the classic SaaS and cloud software model.

AI is different.

Every incremental AI user consumes real compute. Every prompt burns GPU cycles. Every inference request has a cost. That changes the economic structure of the entire industry.

Right now, the AI value stack looks like an inverted triangle. The largest and most profitable value is concentrated at the bottom: semiconductors, GPUs, data centers, power, memory, networking, and infrastructure. The application layer is growing fast, but it is still relatively small and often much less profitable.

That is why Nvidia has become the defining company of the AI era so far. Its data center business captures enormous demand from hyperscalers, model labs, AI startups, and enterprises. Nvidia’s gross margins are far higher than most AI application companies because it sits at the scarce, bottlenecked layer of the stack.

Meanwhile, many AI application companies may be growing revenue rapidly but still face hard gross margin questions. Unlike traditional software, AI applications are not free to serve. The marginal user is expensive. That is one reason several large-scale AI businesses can reach billions in revenue while still having uncertain profitability.
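A toy gross margin calculation makes the contrast concrete. All numbers here are illustrative assumptions, not reported figures from any company:

```python
def gross_margin(revenue_per_user: float, serve_cost_per_user: float) -> float:
    """Gross margin as a fraction of per-user revenue."""
    return (revenue_per_user - serve_cost_per_user) / revenue_per_user


# Classic SaaS: the marginal user is nearly free to serve.
saas = gross_margin(revenue_per_user=100.0, serve_cost_per_user=12.0)    # 0.88

# AI application: every prompt burns GPU cycles, so serving cost scales with usage.
ai_app = gross_margin(revenue_per_user=100.0, serve_cost_per_user=45.0)  # 0.55
```

Same revenue, very different business: the SaaS product keeps roughly 88 cents of every dollar, the AI app keeps 55, and the AI app's cost line grows with usage instead of amortizing away.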

This creates the core AI investing question: when does the triangle flip?

In cloud computing, it took many years for infrastructure investment to translate into massive software value creation. AWS began its journey in 2006-2007, landed major customers like Netflix years later, and eventually became one of the most important profit engines in technology. That transition took roughly a decade.

AI may take as long — or longer.

The reason is that the substrate is harder. AI needs GPUs, power, data centers, memory bandwidth, networking, model training, inference optimization, and constant capital investment. This is not just software distribution. It is industrial-scale computing.

Image credit: Nvidia

One major debate is whether the current AI capex boom is simply building capacity for future application revenue. The optimistic view is that today’s infrastructure buildout is like laying railroads. The tracks have to be built before the economy can form around them. The skeptical view is that hyperscalers may overbuild if application revenue and profitability do not catch up fast enough.

That makes hyperscaler capex guidance one of the most important signals in AI. Microsoft, Google, Amazon, Meta, and others are effectively telling the market how much conviction they have in future AI demand. If those numbers continue rising, the buildout continues. If they slow sharply, it may signal that the current equilibrium is under pressure.

Another major theme is the split between training and inference. Training frontier models is capital-intensive but relatively predictable. Inference is different. It is bursty, user-driven, and tied to real-world usage. As AI moves from demos to daily workflows, inference should become a larger share of compute demand. That shift matters because inference economics will determine whether AI apps can become durable, profitable businesses.

It also raises a critical question about consumer AI: can ChatGPT, Gemini, Claude, and similar products become as large as Google Search, YouTube, WhatsApp, Instagram, or TikTok?

ChatGPT has already reached massive scale, but scale alone is not enough. The key questions are monetization and frequency. Google and Meta monetize billions of users through ads at high annual revenue per user. AI apps currently monetize far less per user, and many users are still free. Subscription revenue is meaningful, but it may not be enough to support the full economics of consumer AI at global scale.

That points to a likely future debate: will AI eventually become an advertising business?

Today, that feels uncomfortable. People do not want ads interrupting a personal AI conversation. But the same skepticism existed during the Facebook mobile transition. Critics argued that ads would not work on phones because screens were too small. They were wrong. The ad model adapted.

AI may produce a new kind of advertising model built around intent, context, trust, and attribution. If a user asks an AI assistant for help choosing software, booking travel, buying insurance, selecting a school, or planning a purchase, the commercial intent is extremely high.

If platforms can insert monetization without destroying user trust, advertising could become one of the biggest unlocks in AI economics.

The enterprise AI market has its own questions. Incumbents like Salesforce, Palantir, Microsoft, Adobe, ServiceNow, and others are adding AI features into existing platforms. These companies may not always show up cleanly as “AI application revenue,” but their AI usage flows through model providers, cloud infrastructure, and inference spend. The AI transformation of incumbents may therefore be partly hidden inside existing software budgets.

The most competitive layer appears to be the middle of the stack: inference platforms, AI infrastructure startups, model serving, orchestration, optimization, and developer tooling. This layer has many promising startups, but it also faces existential pressure from hyperscalers. The key question for each company is: are you a feature or a platform?

If a capability naturally belongs inside AWS, Azure, Google Cloud, OpenAI, Anthropic, or Nvidia, it may be difficult to build a standalone company around it. But if it becomes a control point across models, clouds, workloads, and applications, it may become a durable platform.

The most important takeaway is that AI should be analyzed as a full-stack economic system, not as a collection of exciting apps. The right questions are not just "what can this model do?" or "which startup is growing fast?" The better questions are:

Where does value accrue?

Who has pricing power?

Which layer has scarcity?

Which businesses have durable gross margins?

Which costs decline with scale, and which costs increase with usage?

Which companies are platforms, and which are features?

AI is not a fad. But the economics are not settled. The infrastructure layer is winning now. The application layer is growing quickly but still has to prove profitability. Consumer AI needs a stronger monetization engine. Enterprise AI must show measurable productivity gains. Inference needs to become cheaper and more efficient. And the entire ecosystem has to determine whether this inverted triangle eventually flips.

That is where the money in AI will be decided.

The personal blog of Mukund Mohan