All posts by Mukund Mohan

My discipline will beat your intellect

Vibe Coding Goes Mainstream: When Software Meets Imagination

🌀 The Year Coding Changed Forever

When Collins Dictionary declared “Vibe Coding” the word of the year for 2025, it marked more than a trend — it confirmed a cultural shift in how software gets made.

Once a niche inside developer Discords and AI labs, vibe coding has become a mainstream creative movement.

It’s the moment when programming stopped being about syntax and started being about conversation.

Instead of learning to code, millions are now learning to collaborate with intelligence.


⚙️ What Is Vibe Coding?

At its core, vibe coding means using AI to build apps through natural language or voice prompts.

You describe what you want — an app, website, workflow, or game — and the AI builds it.

You review the results, suggest changes, and iterate until it feels right.

As Andrej Karpathy — who first popularized the term in February 2025 — described it:

“You stop coding and start describing. You give in to the vibes.”

That surrender is what makes it powerful — and polarizing.

Champions say it democratizes creation.

Critics say it produces code you can’t understand.

Both are right.


🧩 The Two Layers of Vibe Coding

Vibe coding introduces a new division of labor in software:

1️⃣ The Vibe Layer — Humans describe intent. “Build me a booking app with Stripe and Supabase.”

2️⃣ The Verification Layer — Engineers validate what’s built, test logic, ensure security, and deploy.

AI builds; humans supervise.

It’s no longer just writing code — it’s orchestrating it.


🧠 The Tool Stack of the Vibe Era

Mashable’s roundup highlights the most popular tools powering this new workflow — and they reflect how fast the ecosystem is maturing.

Claude Code (Anthropic)

Optimized for reasoning, multi-file edits, and safety. Ideal for step-by-step builds with real context.

Used by developers who want conversational debugging without chaos.

GPT-5 (OpenAI)

The powerhouse for “agentic coding” — giving you full, working apps from a single prompt.

Beginners love it for speed; pros use it for scaffolding entire backends.

Cursor IDE

Think of it as VS Code with an AI copilot that actually understands context.

You can import libraries, fix bugs, and chat directly with your codebase.

Lovable

The creative’s choice. Build stunning frontends through conversational design.

Ideal for marketers, designers, or founders who want something beautiful that just works.

v0 by Vercel

A UI-generation tool that turns natural language into deployable components — perfect for web apps and prototypes.

Often paired with Claude Code for “backend + frontend” synergy.

Opal (Google)

A beginner-friendly playground for vibe coding — visual, safe, and powered by Gemini, Imagen, and Veo.

21st.dev

A companion tool for building and exporting UI components that integrate seamlessly with your AI-generated app.


🌍 Why This Matters

Vibe coding doesn’t just make app creation faster — it changes who gets to build.

You no longer need to know Python or React to bring an idea to life.

You just need clarity of thought and creative direction.

That’s a profound shift.

Because when intent becomes the new interface, the line between founder and developer starts to blur.


🚀 The New Workflow

1️⃣ Describe your idea.

2️⃣ Let AI build the foundation.

3️⃣ Iterate through conversation.

4️⃣ Deploy with one click.

From there, you’re not just coding.

You’re conducting creation.


🔮 The Future of Software Creation

In the old world, code was the bottleneck.

In the new world, imagination is the constraint.

Vibe coding is still messy — but it’s a glimpse of what’s coming:

software that feels more like storytelling than engineering.

The next generation won’t say, “I built an app.”

They’ll say, “I described one.”

And that might just be the biggest paradigm shift in tech since the browser.


🧭 Final Thought

Vibe coding won’t replace developers.

It will amplify them — and invite everyone else to join the creative process.

As Karpathy said:

“Vibe coding is what happens when creativity stops waiting for permission.”

The Best Tech Stack for Vibe Coding Your First App

How to start building today

Zero experience. One idea. Built with AI.

If you’ve ever had an idea for an app but didn’t know where to start — welcome to vibe coding.

Vibe coding is what happens when AI turns app-building into a conversation. You don’t need to be an engineer. You just need to describe what you want, and your AI co-pilot builds it.

The key is knowing which tools to give your AI so it has the right foundation.

Here’s the exact stack I use (and recommend to every first-time builder) — every tool here has a free tier, is AI-friendly, and scales beautifully from V1 to real users.


⚡ 1. Web Framework: Next.js

Why it matters:

Next.js is the backbone of modern web development. It’s fast, SEO-friendly, and integrates natively with Vercel (your hosting layer).

Why it works for vibe coding:

AI models like GPT-5 or Claude Code “understand” Next.js exceptionally well — meaning they can scaffold full projects, add routes, and manage components with almost no clarification needed.

In short:

Next.js is the easiest way to go from text prompt → production-ready web app.
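
To make the “text prompt → working route” claim concrete, here is a minimal sketch of the kind of API route an AI assistant typically scaffolds in a Next.js App Router project. The file path (`app/api/hello/route.ts`) and the `greeting` field are illustrative, not from any specific project; the handler itself uses only the Web-standard `Request`/`Response` objects that Next.js route handlers are built on, so it runs on any runtime that provides them (Node 18+).

```typescript
// Hypothetical App Router route handler, e.g. app/api/hello/route.ts.
// Next.js route handlers are plain exported functions over the standard
// Request/Response types — one reason AI models scaffold them so reliably.
export async function GET(request: Request): Promise<Response> {
  const { searchParams } = new URL(request.url);
  const name = searchParams.get("name") ?? "world";
  return new Response(JSON.stringify({ greeting: `Hello, ${name}!` }), {
    headers: { "content-type": "application/json" },
  });
}
```

Because the handler is just a function, it can be unit-tested without starting a server — a useful habit when the code was AI-generated in the first place.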


🧱 2. Database: Supabase

Why it matters:

Supabase is an open-source alternative to Firebase — with built-in authentication, APIs, and Postgres under the hood.

Why it works for vibe coding:

You can ask your AI to “set up a Supabase table for users, posts, and comments” and it’ll know exactly how to do it. Supabase even generates REST and GraphQL APIs automatically — perfect for AI to plug into.

In short:

It’s the most “AI-friendly” database layer — powerful, flexible, and scalable.


🔐 3. Authentication: Clerk

Why it matters:

User auth is where most first-time builders get stuck — login, signup, forgot password, etc.

Why it works for vibe coding:

Clerk provides pre-built components for user accounts, session management, and multi-provider sign-ins (Google, GitHub, etc.). Your AI can drop them right into your Next.js app without manual configuration.

In short:

Clerk handles the painful part of “who’s allowed in,” so you can focus on building the experience.


💳 4. Payments: Stripe

Why it matters:

If you’re building anything people will pay for — SaaS, memberships, digital goods — Stripe is still the gold standard.

Why it works for vibe coding:

Every major AI model understands Stripe’s API docs inside and out. You can literally say:

“Add monthly billing using Stripe for $20/month per user.”

…and the AI will wire it up end-to-end.

In short:

Stripe gives your app superpowers — and turns your project into a business.
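
Wiring up billing is the part the AI handles; verifying that Stripe’s webhooks are genuine is the part you should understand yourself. Below is a minimal sketch of Stripe’s documented webhook-signature scheme — an HMAC-SHA256 over `"<timestamp>.<raw body>"`, compared against the `v1` value in the `Stripe-Signature` header. In a real app you would call `stripe.webhooks.constructEvent` from the official SDK instead; this hand-rolled version just shows what that call checks.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify a Stripe webhook signature by hand, following Stripe's documented
// scheme: the Stripe-Signature header carries "t=<timestamp>,v1=<hmac>",
// where the HMAC is SHA-256 over "<timestamp>.<raw body>" keyed with your
// webhook signing secret. Prefer stripe.webhooks.constructEvent in production.
export function verifyStripeSignature(
  payload: string,
  header: string,
  secret: string
): boolean {
  const parts = new Map(
    header.split(",").map((kv) => kv.split("=", 2) as [string, string])
  );
  const timestamp = parts.get("t");
  const expected = parts.get("v1");
  if (!timestamp || !expected) return false;

  const signed = createHmac("sha256", secret)
    .update(`${timestamp}.${payload}`)
    .digest("hex");

  // Constant-time comparison to avoid leaking information via timing.
  const a = Buffer.from(signed);
  const b = Buffer.from(expected);
  return a.length === b.length && timingSafeEqual(a, b);
}
```

This is exactly the “verification layer” work from earlier in this post: the AI can generate the billing flow, but a human should still confirm the security-critical pieces.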


🎨 5. Styling: Tailwind CSS

Why it matters:

Tailwind makes styling fast and consistent — no more wrestling with CSS files.

Why it works for vibe coding:

Tailwind is pattern-based. When your AI writes className="bg-blue-600 text-white p-4 rounded-lg", you instantly get a professional-looking UI. It’s predictable, composable, and ideal for AI-generated design.

In short:

It’s the “design language” that AI speaks fluently.
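
That pattern-based quality is easy to see in code: Tailwind classes compose as plain strings, so conditional styling is just string logic. Here is a sketch of the kind of helper AI tools often generate — the `cx` function is illustrative (similar in spirit to the popular clsx package, not a real library API), and the class names are ordinary Tailwind utilities.

```typescript
// A tiny class-name composer in the spirit of the clsx package.
// The `cx` helper is illustrative, not a real library API. Tailwind
// styling is just string composition — exactly the kind of predictable
// pattern AI models handle well.
type ClassValue = string | false | null | undefined;

export function cx(...values: ClassValue[]): string {
  return values.filter(Boolean).join(" ");
}

// Conditional button styling: base utilities plus a variant.
export function buttonClasses(primary: boolean, disabled = false): string {
  return cx(
    "p-4 rounded-lg text-white",
    primary ? "bg-blue-600" : "bg-gray-500",
    disabled && "opacity-50 cursor-not-allowed"
  );
}
```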


🌍 6. Hosting: Vercel

Why it matters:

You need a place to deploy your app that doesn’t require server configuration, Docker, or DevOps.

Why it works for vibe coding:

Vercel was built by the same team behind Next.js — the integration is seamless. You can deploy your app from GitHub in one click, and your AI can even handle setup through Vercel’s API.

In short:

It’s the smoothest “idea → live app” pipeline on the internet.


✉️ 7. Emails: Resend

Why it matters:

Almost every app needs transactional emails — confirmations, password resets, onboarding messages.

Why it works for vibe coding:

Resend has one of the cleanest APIs out there, and it’s built for developers who want deliverability without complexity. Your AI can easily plug in Resend for all your app’s outbound communication.

In short:

No more debugging SMTP — it just works.


🧠 8. AI Logic Layer: GPT-5 + Claude Haiku

Why it matters:

You’ll often need to add intelligent behavior — summarizing data, generating text, parsing user input, etc.

Why it works for vibe coding:

Use GPT-5 for tough reasoning or complex multi-step builds.

Use Claude Haiku for lightweight tasks — it’s cheaper and faster.

In short:

Think of GPT-5 as your architect, and Haiku as your assistant. Together, they let your app “think.”
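
The architect/assistant split can be expressed as a simple routing rule. This is a sketch under stated assumptions: the model identifiers and the token threshold are illustrative, and real calls would go through each provider’s SDK rather than a returned string.

```typescript
// Illustrative model router: send heavy reasoning to the large model and
// lightweight tasks to the cheap, fast one. The model names and the 8,000-
// token cutoff are assumptions for this sketch, not real API constants.
type Task = {
  kind: "reasoning" | "summarize" | "classify" | "generate";
  inputTokens: number;
};

export function pickModel(task: Task): "gpt-5" | "claude-haiku" {
  // Multi-step reasoning or very large contexts go to the "architect".
  if (task.kind === "reasoning" || task.inputTokens > 8000) return "gpt-5";
  // Routine work goes to the cheaper, faster "assistant".
  return "claude-haiku";
}
```

In practice this kind of routing is where most of an app’s AI cost savings come from: the cheap model handles the bulk of requests, and the expensive one is reserved for the calls that genuinely need it.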


🧑‍💻 9. 3D Assets: Three.js

Why it matters:

If you want to add visual magic — interactive scenes, product demos, or dashboards — Three.js is your toolkit.

Why it works for vibe coding:

AI models can generate scenes, shapes, lighting, and even animations in Three.js with simple descriptions.

In short:

Three.js turns creative ideas into immersive experiences.


🪄 10. The Builder: Claude Code

Why it matters:

Claude Code (by Anthropic) is my go-to AI development companion. It’s context-aware, remembers files, and can reason across entire codebases.

Why it works for vibe coding:

You can feed it your stack, describe what you want, and it handles setup, scaffolding, and iteration — almost like having a calm, senior engineer pair-programming with you.

In short:

Claude Code turns this stack into something living — an AI-native development environment.


🚀 How to Get Started

  1. Copy this stack.
  2. Paste it into Claude Code or your AI of choice.
  3. Give it your app idea (or ask it to brainstorm one).
  4. Say: “Start building.”

You’ll have a working MVP — database, auth, payments, UI — in hours, not weeks.

No tutorials. No setup hell. Just conversation → creation.


💭 Final Thought

The hardest part of building apps used to be coding.

Now it’s deciding what to build.

We’re no longer learning syntax — we’re learning how to collaborate with intelligence.

Welcome to the new creative frontier:

Vibe Coding.

Claude Code Web Just Made the 10X Engineer 100X – the real take

The uncomfortable truth nobody wants to say out loud:

Claude Code Web’s multi-threading capability doesn’t democratize coding—it weaponizes talent disparity.

Here’s why this is the most polarizing development in software engineering since GitHub Copilot, except way more brutal:

For the 10X engineer: You’re not just 10X anymore. You’re 100X. Why? Because while mediocre developers are still figuring out how to write a coherent prompt for ONE task, you’re orchestrating six parallel threads like a conductor leading a symphony. You understand:

  • Which tasks can run independently
  • How to structure problems for maximum parallelization
  • How to review and integrate multiple streams of AI output simultaneously
  • When to let threads run and when to intervene

Your systems thinking and architectural intuition now have a force multiplier that compounds exponentially. You’re not coding anymore—you’re conducting an orchestra of AI agents.

For everyone else: The skill gap just became a chasm. The “democratization of coding” narrative is a lie. Claude Code Web is actually a skill amplifier, and amplifiers make disparities worse, not better.

Think about it: Give a novice a power tool, they might hurt themselves. Give a master craftsman that same tool, they’ll build a cathedral in a weekend.

Why This Take Makes People Furious

The optimists hate it because it suggests AI won’t level the playing field—it’ll tilt it further toward those who already excel.

The traditionalists hate it because it admits that the future belongs to those who can think in parallel, manage complexity, and trust AI—not those who can write the most elegant manual code.

The “learn to code” crowd hates it because it suggests that raw coding skill matters less than orchestration, architecture, and prompt engineering—skills that take years of experience to develop.

The Nuclear Take

The 1X engineer using Claude Code Web will feel productive. They’ll ship features. They’ll think they’re keeping up.

But the 10X engineer? They’re building entire systems in the time it takes others to build features. They’re parallelizing not just tasks, but learning—running multiple experiments, testing architectures, exploring solutions simultaneously.

The productivity gap isn’t additive. It’s multiplicative. And multi-threading just added another multiplier to the equation.

Welcome to the era where “knowing how to code” is table stakes, and “knowing how to orchestrate complexity” is what separates the architects from the assembly line.

The reality is a lot more nuanced.

The Multi-Threading Reality Check

What’s genuinely true:

  • Multi-threading in Claude Code Web is a legitimate productivity multiplier for specific types of work
  • Being able to run parallel tasks (different features, tests, documentation, refactoring) simultaneously is powerful
  • People who understand system architecture and task decomposition will extract more value from this

What’s overstated in my hot take:

  • The “100X” multiplier is absurd hyperbole. Real productivity gains are meaningful but not magical
  • Most experienced developers were already good at context-switching and managing multiple concerns—this just makes it more explicit
  • Junior developers can absolutely learn to think in parallel; it’s not some mystical gift

The Actual Democratization Story

Here’s the nuance: AI tools like Claude Code Web are both democratizing AND widening skill gaps. Both things are true simultaneously.

Democratization that’s real:

  • Someone with a CS degree from 2010 who’s been out of the loop can get back up to speed faster
  • People can build prototypes and MVPs without needing a full team
  • The barrier to “I have an idea” → “I have a working thing” is genuinely lower
  • Junior developers can learn by doing, faster than ever before

Skill gaps that are real:

  • Experienced engineers produce higher-quality architecture when using AI tools
  • Code review skills matter MORE now, not less (you need to spot AI mistakes)
  • Understanding what to build and why still requires domain expertise
  • Prompt engineering is just communication skills + technical knowledge—which experienced people have more of

The Thing Nobody Talks About

The real difference-maker isn’t raw talent or experience—it’s judgment.

With Claude Code Web’s multi-threading, you need to know:

  • Is this task actually parallelizable or will it create merge conflicts?
  • Which threads need my attention first?
  • When is the AI going down the wrong path and needs correction?
  • What’s the right level of granularity for task decomposition?

This is learnable. It’s not magic. But it does take practice.

The Uncomfortable Middle Ground

The honest truth is probably something like:

For experienced developers: You’ll see 2-5x productivity gains on the right kinds of projects. You’ll still hit walls. You’ll still need to think hard about architecture. But yeah, you’ll ship faster.

For junior developers: You’ll learn faster and ship more than junior developers of previous generations. But you might also develop some bad habits if you don’t understand what the AI is doing. Your growth depends on how much you interrogate the code, not just accept it.

For the industry: We’re probably going to see a bifurcation:

  • Teams that learn to leverage these tools effectively will massively outperform
  • Teams that treat AI as a magic wand will produce buggy, unmaintainable code faster than ever
  • The “10X engineer” concept might evolve into “10X teams” who know how to orchestrate both humans and AI

What This Really Means

The multi-threading capability is less about individual genius and more about:

  1. Workflow optimization – Can you structure your work to take advantage of parallelism?
  2. Risk tolerance – Are you comfortable letting AI run while you focus elsewhere?
  3. Integration skills – Can you merge multiple streams of work coherently?

These are learnable, practicable skills. They’re not reserved for some mythical 10X engineer.

The Actually Controversial Take

Here’s what might genuinely be controversial: The era of the lone wolf “10X engineer” might actually be ending.

Why? Because the most effective use of multi-threaded AI coding is collaborative. The best outcomes will come from:

  • Teams that can decompose problems together
  • Engineers who can review AI output quickly and effectively
  • Organizations that can create feedback loops between multiple AI threads and multiple human reviewers

The “100X engineer” narrative assumes scaling is individual. But maybe the real story is that effective AI tooling makes collaboration scale in ways we haven’t seen before.

AI is optimized for the Median. The future belongs to the outliers

LLMs aren’t “creative.”

They’re statistical compression of everything that already exists.

They don’t invent—they average. Their job is literally:

“Given all the design patterns in the world, what’s the most probable next pixel, word, or layout?”

That means AI is gravitationally pulled toward the center of the bell curve. It is designed to be safe.

And safe is the enemy of great.


What AI naturally produces:

  • Layouts that look familiar
  • Color palettes that “work”
  • Components that follow convention
  • Slightly-above-average, Dribbble-style UIs

Translation: pleasant, polished, predictable. Good enough to impress non-designers. Not good enough to move culture.


What AI instinctively avoids (because the data rarely contains it):

  • Bold new patterns nobody has tried
  • Weird layouts that break grids
  • Pixel-perfect craft (it blurs details)
  • High information density (it simplifies)
  • High constraint problems (logos, complex UI)
  • Subtle brand identity
  • Clever metaphor or symbolism
  • Risky opinions

Why? Because the median is safe. And the median dominates the training data.


AI = Safe.

Great Design = Opinionated.

Great design is not the “average of what’s been done.” Great design is a decision, a stance, a refusal to blend in. AI is optimized to remove risk. Humans are valuable because we’re willing to take it.


Design is about taste—and taste can’t be averaged.

AI finds the midpoint. Taste lives at the edges. Taste says:

“Everyone is doing X… so I’ll do Y.”

Taste is judgment under constraints—not pattern recall.


The Beige Flood is Coming.

As AI becomes more accessible:

  • Every founder can generate “good-looking” screens.
  • Every template will feel AI-polished.
  • Every landing page will look interchangeable.

We will drown in pleasant, soulless UI. Everything will look… fine. And fine is the new ugly.


In that world, human taste becomes a luxury good.

When everything is mass-produced and “pretty,” what stands out? Not polish. Personality. Not symmetry. Story. Not correctness. Character. In a sea of AI beige, the rare work with soul, sharpness, or a strong opinion will feel electric. People will pay for it. Brands will compete for it. Users will crave it.


AI will dominate production. Humans will dominate direction.

AI can generate infinite “good.” But only humans can define what “great” even is. The winners aren’t the ones who prompt the fastest. They’re the ones who see what AI cannot see. The edge. The vibe. The future.


Conclusion:

AI is the new baseline.

Outliers are the new competitive advantage.

If you’re just “good,” AI will replace you. If you’re bold, specific, opinionated, weird, wildly human? AI can’t follow you there. That’s where the future is.

Learning requires discomfort. How to succeed with Vibe Coding

The New Rules of Vibe Coding: Why “Easy” Is Making You Worse

In 2019, coding education had one enemy: tutorial hell.

You’d watch 6-hour videos, code along flawlessly… and then freeze the moment you had to build something from scratch.

So we fixed that.

We built interactive courses, hands-on projects, fewer videos. Tutorial hell faded away.

But something new took its place.

Welcome to Vibe Coding Hell.

This time, we can build things—sometimes shockingly cool things.

But they’re built with AI, for AI, and under AI supervision.

“I can’t build without Cursor.”

“Claude wrote 6,379 lines to lazy-load my images—must be right?”

“Here’s my project: localhost:3000.”

The problem isn’t output.

The problem is mental models.

Projects are shipping, but understanding is not.

And here’s the uncomfortable truth:

Learning only happens when you feel discomfort.

Tutorial hell let you avoid discomfort by watching someone else code.

Vibe coding hell lets you avoid discomfort by letting AI code for you.

Both lead to the same outcome:

You don’t wrestle with the problem. Your brain never rewires.

“But AI makes me more productive!”

Maybe. Maybe not.

A 2025 study found developers believed AI made them 20–25% faster…

…but in reality, AI slowed them down by 19%.

Speed without understanding is an illusion.

It’s motion, not progress.

And the psychological risk is even bigger:

“Why learn this? AI already knows it.”

If AI doesn’t take our jobs, demotivation will.

The New Rules of Vibe Coding (If you actually want to learn):

❌ Don’t use AI to write the code for you.

No autocomplete. No agent mode. No “build the whole feature.”

✅ Do use AI to think with you.

Explain this. Challenge me. Ask me questions. Show me another approach.

❌ Don’t ask for step-by-step instructions.

That’s just a tutorial with extra steps.

✅ Do ask: “What am I missing?” or “Where could this break?”

Force your brain into active problem-solving.

❌ Don’t accept AI’s first confident answer.

LLMs are sycophants. They’ll tell you what you want to hear.

✅ Do demand sources, real-world examples, and opposing opinions.

That’s where real learning lives.

The Hard Truth

Learning must feel uncomfortable.

Not because struggle is noble.

Because struggle triggers growth.

When you’re stuck, frustrated, and pushing through uncertainty—that’s your neural network literally rewiring.

AI shouldn’t remove that pain.

AI should sharpen it into clarity.

If AI makes coding effortless, it’s making you weaker.

If AI makes thinking deeper, it’s making you unstoppable.

Vibe Coding isn’t the problem.

Vibe Coding without discomfort is.

The future belongs to the developers who learn how to use AI as a thinking partner—

not a crutch.

Real work. Real struggle. Real skill.

That’s the new vibe.

Speed as a moat for startups – the new defensible positions for early stage companies

Founders are obsessed with moats right now—and for good reason. In a world of near-infinite competition, margins trend to zero unless you can defend something real. But here’s the uncomfortable truth: early on, the only moat you actually have is speed.

Not “we ship fast-ish.” I mean Cursor-level speed—one-day sprints in 2023–2024—while big companies take weeks, months, sometimes years to push features through PRDs and committee. In greenfield markets where nobody knows which products matter yet, the team that cycles daily and learns fastest wins the right to worry about moats later.

Speed is missing from Hamilton Helmer’s Seven Powers, but it shouldn’t be. It’s the gateway power. Ship relentlessly; make something people truly want; then stack the classic moats as you scale. That’s the actual sequence. If you’ve got nothing valuable yet, your “moat” is just a puddle.

Once you have traction, process power shows up first. Think of what banks demand from AI agents handling KYC or loan origination. A hackathon demo gets you 80% of the way with 20% of the effort; production-grade reliability on tens of thousands of decisions per day requires the last 1–5% to work almost all the time—and that last mile takes 10–100× the effort. That drudgery is a moat. Plaid-style surface area across thousands of financial endpoints, CI/CD that never breaks, evals that catch edge cases—this is why Stripe, Rippling, and Gusto are hard to copy. Better engineering, done repeatedly, compounds.

Cornered resources come next. Sure, in pharma that’s patents. In modern AI, it’s privileged access: regulated buyers, DoD environments, or proprietary customer workflows and data you collect by being a forward-deployed engineering team. That proprietary data lets you tune models and prompts so your unit economics improve—Character-AI-style 10× serving cost reductions are the blueprint. Having your own best-in-class model helps, but it’s not mandatory on day one; careful context engineering will get you 80–90% of what customers need for the first two years.

Switching costs are evolving, too. The old world was Oracle or Salesforce: migrating schemas and retraining a sales org could cost a year of productivity. LLMs will lower those data-migration costs, but AI startups are creating a new lock-in: months-long onboarding that encodes custom logic and compliance into agents. Six- to twelve-month pilots that convert to seven-figure contracts make a second bake-off irrational. On the consumer side, memory is becoming sticky—tools that actually remember you raise the pain of leaving.

Counter-positioning is quietly lethal. Incumbent SaaS sells per seat; good agents reduce seats. The better their AI, the more revenue they cannibalize. Startups price on work delivered or tasks completed—and then they actually deliver. That culture shift is nontrivial for late-stage incumbents. Second movers who out-execute often win: legal AI teams focusing on application quality over fine-tuning aesthetics; customer support agents like Giga ML that “just work” faster in onboarding. Agents also have superhuman edges: instantly handle 200 languages, infinite patience on bad connections. In vertical SaaS, this flips wallet share: from ~1% “software” take to 4–10% when you absorb operations (AOKA’s HVAC support example). That’s not a feature; that’s a business model moat.

Network effects in AI look like data flywheels and eval pipelines, not just “more friends = more fun.” The more usage you have, the more ground-truth failures you capture, the better your prompts, tools, and models get. Cursor’s telemetry—every keystroke improving autocomplete—compounds quality. Brand still matters (ask Google how it feels to chase ChatGPT), but the durable edge is usage → data → better product → more usage.

Finally, scale economies mostly live at the foundation layer. Training frontier models and crawling large slices of the web (think EXA’s “search for agents”) are capital-intensive, with low marginal costs at scale. Even with DeepSeek-style RL efficiencies, the base models remain expensive—another reason application-layer speed matters early.

So here’s the playbook. Find existential pain—work that’s so broken someone’s promotion or business is on the line. Ship daily until you own that pain. Use the speed moat to earn time, users, and cash. Then layer in process power, cornered resources, switching costs, counter-positioning, network/data effects, and—when relevant—scale. Think five years out, sure, but execute like you only have five days. Because in the beginning, you do.

Is ChatGPT or Google sending more customers to B2B companies?

Overview

Recent analysis of client web traffic compared traditional Google organic search sessions with attributable traffic from AI tools (primarily ChatGPT, with smaller volumes from Perplexity and others). The goal was to understand the ratio of SEO-driven traffic to AI-driven traffic and identify implications for marketing strategy.

Key Findings

  1. SEO traffic remains strong and growing. Across clients, organic sessions from Google continue to trend upward. In multiple cases, traffic has scaled from under 100 sessions to several thousand per month. Publishing high-intent, bottom-funnel content continues to drive measurable growth.
  2. AI traffic is rising but remains small. AI referrals began appearing in mid-2024 and show steady growth. However, the volume remains modest:
    • On average, AI traffic is ~3% of SEO traffic.
    • Most accounts fall in the 2–5% range, with outliers as low as 0.2% and as high as 7%.
    • Nearly all AI traffic originates from ChatGPT, though conversions sometimes come from other platforms like Perplexity.
  3. Perceived SEO decline is often an attribution artifact. Slight declines in organic traffic have been observed in some mature accounts. However, these dips often coincide with increases in branded search traffic. This suggests users may discover companies through AI overviews or AI search results but then navigate via direct or branded searches rather than clicking organic links.
  4. Conversions do not align directly with traffic. While ChatGPT contributes the majority of measurable AI sessions, conversions are often driven by other tools. This highlights the need to evaluate AI channels on conversion performance, not traffic volume alone.
  5. Attribution challenges are intensifying. Cookie consent, privacy changes, and shifts in user click paths make it harder to tie conversions directly to SEO or AI sources. Many conversions appear as “direct/none,” despite being influenced by search or AI exposure.
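
One concrete way teams separate these channels is referrer classification. The sketch below assumes a session’s referrer URL is available; the hostname lists cover the AI tools named above but are assumptions that would need maintaining, and sessions with no referrer still land in the “direct/none” bucket described in point 5.

```typescript
// Classify a session's traffic source from its referrer URL.
// The hostname lists are illustrative; real analytics setups keep these
// in config and still cannot attribute referrer-less "direct" sessions.
const AI_HOSTS = ["chatgpt.com", "chat.openai.com", "perplexity.ai"];
const SEARCH_HOSTS = ["google.com", "bing.com"];

export function classifyReferrer(
  referrer: string | null
): "ai" | "organic-search" | "direct" | "other" {
  if (!referrer) return "direct"; // no referrer at all: the attribution gap
  let host: string;
  try {
    host = new URL(referrer).hostname.replace(/^www\./, "");
  } catch {
    return "other"; // malformed referrer string
  }
  if (AI_HOSTS.some((h) => host === h || host.endsWith("." + h))) return "ai";
  if (SEARCH_HOSTS.some((h) => host === h || host.endsWith("." + h)))
    return "organic-search";
  return "other";
}
```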

Strategic Implications

  • SEO is not dead. Organic growth remains consistent, and claims of its demise are not supported by the data.
  • AI traffic is complementary, not a replacement. It is increasing but represents a single-digit share of SEO volume and conversions.
  • Brands should view SEO and AI as interconnected. Visibility in search engines feeds AI discovery, and vice versa. Both channels ultimately contribute to brand awareness and lead generation.
  • Conversion measurement must evolve. Teams should place greater emphasis on overall lead growth and blended attribution, rather than expecting precise channel-level credit.

Conclusion

Current evidence shows SEO continues to be a primary driver of traffic and conversions, while AI referrals are emerging but still limited in scale. The most effective strategy is not choosing between SEO and AI but understanding how the two reinforce each other in driving brand discovery and measurable outcomes.

How to Increase AI Visibility (Common Mistakes People Make)


I just watched a great episode from The Grow and Convert Marketing Show that breaks down the exact question many of us in marketing have been asking: what should we actually do to increase our AI visibility? The episode cuts through the noise and fearmongering from some of the AI visibility tools and gives a clear, practical framework you can use today. Here’s the short, friendly recap I’d share with a colleague—what I learned, what to avoid, and a simple plan you can implement this week.

Why marketers are suddenly anxious about AI visibility

First, the context: a bunch of CMOs, founders, and marketing leads are opening up AI visibility dashboards, seeing competitors “winning,” and getting understandably nervous. The common pattern is this: an SEO/AI tool runs a bunch of prompts, tallies how often your brand is mentioned in AI overviews, spits out a single percentage or “share of voice,” and then you look bad on paper.

That panic is often misplaced. The episode makes a core point I agree with: these tools tend to prioritize quantity over quality. They measure how frequently your brand appears across a wide net of prompts—but they don’t judge whether those prompts actually matter to your business. In other words, a high visibility score that’s driven by irrelevant, top-of-funnel, or non-buying-intent queries isn’t valuable.

The common traps: irrelevant prompts and false comparisons

Two client examples from the episode illustrate this well:

  • A B2B software client saw a competitor showing up in AI overviews for a bunch of queries like “things to do in Illinois,” “most visited cities,” and city-specific travel guides. That competitor publishes consumer-facing content, so they naturally appeared in those AI prompts. But the client sells software to vacation-related businesses, not consumer travel guides, so those AI mentions are largely meaningless.
  • A look at SEMrush’s AI brand performance demo for Warby Parker showed a “share of voice” percentage and dozens of specific queries. Some prompts made total sense (e.g., “Which retailers have the best customer reviews for eyewear?”) and mattered to Warby Parker. Other prompts—like “Who offers in-app virtual try-on for glasses?”—might be irrelevant or of very low commercial value.

Both examples show the same problem: tools give a big-picture metric without filtering for intent or business relevance. That metric can make leaders panic even when the brand is doing the right things for its customers.

Two rules that will save you from unnecessary panic

If you remember only two things from this article (and the episode), let them be these:

  1. Intent matters. Not all prompts are equal. A mention in a “what are fun things to do in Springfield” overview is not the same as ranking in an AI overview for “best property management software for vacation rentals.” Pick queries aligned to buying intent.
  2. SEO fundamentals still matter most. From the data referenced in the episode and our own observations, there’s a strong correlation between ranking in Google search results and showing up in AI overviews (including ChatGPT and Google’s AI responses). So prioritize the core things that make you rank well on Google.

How AI overviews actually decide what to show

The episode summarizes this neatly into two inputs that influence LLM answers like ChatGPT and Google AI overviews:

  • Training data: The LLM’s broad knowledge built from public datasets, books, podcasts, and web content. Getting into that training data is a long-term brand effort; years of marketing activity add up here.
  • Live web search: Many LLMs “search the web” when they don’t have enough internal information, and they use Google or other web sources. That makes your presence in current Google search results a very direct lever to influence AI answers.

Practically: focusing on appearing in top Google results (your domain or reputable third-party pages that mention you) is the most tangible way to influence whether AI mentions your brand.

A simple, practical framework you can implement today

Stop staring at a single “AI visibility” percentage and start controlling what matters. Here’s a step-by-step playbook I’d use right now if I were advising a marketing team with limited budget:

  1. Pick 5–10 core topics (not a thousand prompts). These should be the queries that directly indicate buying intent and align with your product. Examples: “prescription glasses online,” “equipment rental software,” “best content marketing agency for SaaS.” Keep them tight and product-focused.
  2. Map intent for each topic. Decide whether each topic is TOFU (top of funnel), MOFU, or BOFU (bottom of funnel) and what a successful outcome looks like: visits, demo signups, trial starts, or direct conversions.
  3. Audit your current rankings. For those 5–10 topics, track where your pages currently appear in Google. Do this monthly. You can use a paid tool or a simple spreadsheet with manual checks.
  4. Fix and optimize your pages. Update content, clarify intent, add conversion opportunities, and ensure pages answer the user’s question better than competitors. This is classic SEO content work—do it well.
  5. Earn placements on other relevant pages/lists. If other sites produce “best of” lists or roundups for these topics, get on them. Traditional PR outreach and relationship building still work here—email editors, share case studies, provide data, and be helpful.
  6. Monitor AI overviews for those topics, not your overall percentage. If you want, use an AI-tracking tool and focus reporting on those 5–10 queries rather than a single share-of-voice metric.
  7. Be wary of short-term “hacks.” Commenting across many Reddit threads, paying for placement, or other manipulative tactics might give transient wins. They’re not a substitute for sustainable SEO and product-driven marketing.
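Step 3 of the playbook above (audit your rankings monthly in a simple spreadsheet) is easy to script instead of maintaining by hand. Here is a minimal sketch in Python; the topics, file name, and rank values are illustrative placeholders, and the actual positions would still come from a rank-tracking tool or a manual search for each topic:

```python
import csv
from datetime import date
from pathlib import Path

# Hypothetical buying-intent topics -- replace with your own 5-10.
TOPICS = [
    "equipment rental software",
    "best property management software for vacation rentals",
]

LOG = Path("rank_log.csv")

def record_ranks(ranks, log_path=LOG):
    """Append this month's Google ranks to the CSV log.

    `ranks` maps topic -> position in Google results (None = not found).
    """
    is_new = not log_path.exists()
    with log_path.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["date", "topic", "rank"])
        today = date.today().isoformat()
        for topic in TOPICS:
            writer.writerow([today, topic, ranks.get(topic)])

# Example monthly check (positions entered after searching each topic):
record_ranks({"equipment rental software": 7,
              "best property management software for vacation rentals": None})
```

Run it once a month and the CSV becomes the trend line you discuss in your monthly review, with no dashboard subscription required.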

Examples of good vs. bad AI visibility efforts

From the episode, here’s how to classify potential activities:

  • Good: Updating core product pages and buying-intent content. This improves organic rankings and is likely to increase AI mentions in meaningful ways.
  • Good: Earning placements on authoritative lists that already rank well. That amplifies signals without being spammy.
  • Less useful: Trying to rank for dozens of irrelevant prompts the tool suggests. This wastes effort on topics that won’t convert.
  • Risky: Paying for placements that violate policies or trying to game AI algorithms with low-quality tactics. Short-term gains can become long-term penalties.

What about expensive AI visibility tools?

There are helpful tools out there that can automate monitoring and give you a big dashboard. But if you can’t justify the budget, you don’t need them to make progress. The hosts suggested a pragmatic alternative:

  • Pick your 5–10 priorities, build a simple spreadsheet, and check them periodically.
  • Have your team discuss status and actions every month. This focuses your efforts on topics that matter.

If you do decide to invest in a tool later, you’ll have clarity on which queries you care about and can ask the tool to monitor those specifically—rather than relying on whatever list it auto-generates.

Final takeaways — what I’m telling my team

If you’re feeling that uneasiness after opening an AI visibility report, here’s the friend-to-friend advice I’d give:

  • Don’t panic over a single share-of-voice number. It’s easy to misinterpret. Ask: which prompts contribute to that number, and do those prompts matter?
  • Pick a handful of meaningful queries and own them. Monitoring and optimizing 5–10 buying-intent topics is far more productive than chasing hundreds of irrelevant prompts.
  • Double down on SEO basics. Strong organic ranking signals are the most reliable way to influence AI outputs today. Create great content, earn links, and fix UX/conversion issues.
  • Use PR and list placements strategically. Getting on trusted lists that already appear in search results is a sensible, scalable tactic to increase the chances AI tools reference you.
  • Avoid reliance on hacky short-term tactics. They might work momentarily, but long-term brand and product strength wins.

“Do the basics. If you’re not doing the basics right now, then you’re going to have a lot harder time showing up in AI.” — Summarized from The Grow and Convert Marketing Show

How to get started this week (quick checklist)

  1. Pick 5–10 product-related topics with clear buying intent.
  2. Search each topic on Google and note the top 3–5 results.
  3. Audit your content for those topics—are you answering the searcher’s question? Is there a clear conversion path?
  4. Update or create the highest-value pages first (optimize for intent and conversions).
  5. Identify 3 external sites/lists where your brand should appear and start outreach.
  6. Set a monthly review to check rankings and AI overview presence for your chosen topics.

Closing thought

AI visibility is real and worth thinking about, but it’s not a magic replacement for SEO or product-driven growth. Focus on the queries that drive value, do the SEO fundamentals well, and use PR/list placements to complement your efforts. If you do that, the AI mentions will follow—without the panic and without wasting resources on irrelevant metrics.

If you want to dive deeper, follow The Grow and Convert Marketing Show for more breakdowns like the one I summarized—it’s a great resource for practical, no-nonsense marketing advice.

How are people using ChatGPT – research report summary

ChatGPT launched in November 2022. By July 2025, 18 billion messages were being sent each week by 700 million users, representing around 10% of the global adult population.

The first full research paper on its use was published today.

Non-work messages now represent over 70% of all ChatGPT messages.

The share of messages related to computer programming is relatively small: only 4.2% of ChatGPT messages, compared with 33% of work-related Claude conversations.

The share of messages related to companionship or social-emotional issues is fairly small: only 1.9% of ChatGPT messages are on the topic of Relationships and Personal Reflection.

The gender gap in ChatGPT usage has likely narrowed considerably over time, and may have closed completely.

AI and Affiliate Marketing: How I Build Content Engines That Actually Move the Business

I recently sat down with Mukund Mohan on his Seattle Side and the East Side podcast to walk through how I think about content, SEO, AI, and affiliate marketing in 2025. I wanted to capture that conversation here—what I’ve learned since my first job out of college at Kissmetrics, what I’d do differently if I were starting today, and the concrete playbook I use now to help B2B companies grow traffic, leads, and revenue.

Two-minute backstory (so you know where I’m coming from)

My career began when Neil Patel hired me into a marketing role at Kissmetrics. I was an entry-level content marketer—writing blog posts and support documentation, shipping content, building a knowledge base. That blog became legendary. People loved the content so much they used to say they loved the Kissmetrics blog more than the product itself.

Since then I’ve led content teams across startups, founded my own company, and co-founded Stone Press—a B2B affiliate SEO company that scaled aggressively. We drove millions in revenue through SEO, inbound, conversion optimization, and building high-performing teams. We grew it into a meaningful business, but platform shifts and core updates to Google’s ranking algorithm hit us hard and forced a pivot. Now I help clients stand up or revitalize their B2B content programs, bringing lessons from both the golden SEO era and the messy, AI-driven present.

What would I do differently if I were hired into a startup in 2025?

If I were starting today in a VC-backed analytics or SaaS company, my role would look familiar but with sharper structure and modern tooling. I’d still be focused on content: blog posts, landing pages, and support articles. But the workflows would leverage AI in tactical ways—especially for speed and iteration—while maintaining a human-first editorial lens.

The core, timeless playbook I still believe in is: write content that impacts people, ship consistently, and distribute through multiple channels. That means building personal brands around the work, maintaining an engaged email list, and ensuring content distribution isn’t dependent on a single platform.

Where 2025 is different is in tooling and distribution risk. AI can accelerate content ops—for ideation, outlines, first drafts, and even keyword research. But it’s also saturating channels with low-quality content. That makes authentic, human insight more visible. If you can create genuinely human content—insightful, narrow, opinionated—you stand out more now than ever because AI content has homogenized a lot of the signal.

Editorial vs SEO-first: When to choose which

There’s a temptation to swing between editorial magazine-style content and pure SEO-driven growth. My experience is that both approaches are valuable—and they can complement each other when done intentionally.

  • Editorial-first: Great for brand building, thought leadership, and multi-channel distribution (email, social, podcasts). You ship consistently and build a following.
  • SEO-first: Great for predictable, compounding traffic. Focused on bottom-of-funnel pages, evergreen informational content, and systematic updates.

In the past I leaned heavily into editorial and then swung too far into SEO because it delivered consistent ROI. My regret was not doubling down on SEO in one of my projects when it would’ve produced far larger returns. Today I typically architect a hybrid: establish a tight SEO foundation for bottom-of-funnel pages, then layer editorial content and multichannel distribution on top to diversify acquisition.

“Content that impacts people, shipping really consistently. Multi-channel distribution. Build personal brands and a tight email list—then tie everything into a flywheel.”

How I use AI in content operations (without letting it ruin the brand)

AI is a tool, not a replacement. I use it to accelerate research, generate outlines, and scale repetitious tasks (like support article first drafts). But I push back on purely AI-generated outputs being published as-is. The role AI plays is to free up human time for judgment, angle, and unique insight.

Some practical ways I use AI:

  1. Keyword discovery and clustering at scale to inform a content plan.
  2. Drafting outlines and iterating headlines rapidly so writers can focus on insight and voice.
  3. Creating summarized research and citation lists for complex topics to speed the subject-matter expert’s drafting.
  4. Automating repetitive support content generation, then reviewing and humanizing it before publishing.
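The first item, keyword clustering at scale, doesn't even require an AI call for a rough pass. Here is a naive sketch that groups keywords by shared meaningful tokens; the keyword list, stopwords, and overlap threshold are all illustrative assumptions, and a real workflow would feed in exported keyword data:

```python
# Naive keyword clustering by token overlap -- a simplified stand-in
# for the AI-assisted clustering described above. Keywords are made up.
keywords = [
    "best crm for startups",
    "crm for small startups",
    "email marketing tools",
    "best email marketing tools",
    "crm pricing",
]

STOPWORDS = {"for", "the", "best", "a", "of"}

def tokens(kw):
    """Split a keyword into lowercase tokens, dropping filler words."""
    return {t for t in kw.lower().split() if t not in STOPWORDS}

def cluster(kws, min_overlap=1):
    """Greedy clustering: a keyword joins the first cluster whose seed
    keyword shares at least `min_overlap` meaningful tokens with it."""
    clusters = []
    for kw in kws:
        for c in clusters:
            if len(tokens(kw) & tokens(c[0])) >= min_overlap:
                c.append(kw)
                break
        else:
            clusters.append([kw])
    return clusters

groups = cluster(keywords)
```

Each resulting group becomes a candidate topic in the content plan; a human (or an LLM pass) then names the cluster and decides its funnel stage.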

That said, there’s a lot of low-effort AI content out there. You can spot it: a certain flatness in voice, generic examples, and predictable structure. If humans can create authentic content—even something simple like “hello, here’s our human perspective”—that authenticity stands out. The bar for “human” is lower now, because AI-created content often lacks personality.

B2B affiliate marketing—what it is, why it matters, and how I think about it

Affiliate marketing in B2B is underrated. It absolutely works for SaaS companies serving SMBs and mid-market customers. Enterprise is trickier because volumes are lower and buying cycles are longer, but for many SaaS categories affiliate is a significant channel.

The important shift is that affiliate programs do more than just drive direct commissions. They influence who gets mentioned across the web, which affects SEO signals, social buzz, and even the data LLMs learn from. The web is an ecosystem: publishers, review sites, listicles, and micro-influencers all create the signal that search engines and large language models consume. If your competitors are actively courting those publishers with affiliate incentives and you aren’t, you’re losing share of voice.

“If the number three company in the category is willing to pay and number one and two don’t, publishers will feature number three. That mention ripples into search, social, and LLM training data.”

How affiliate programs change discovery

Think of it this way: publishers and content creators monetize by recommending tools. If they can earn an affiliate commission for recommending Product C, they will—especially if Products A and B don’t have a competitive program. As those articles and listicles propagate, they feed into search signals and the datasets used by LLMs. Over time that visibility compounds, which is why affiliate programs are strategic, not just tactical.

Who to recruit as affiliates and how to reach them

I use a two-pronged approach: inbound + targeted outreach.

  • Inbound funnel: Maintain a clean signup flow and manage it actively. You’ll get high-volume, low-quality inbound, but occasionally a “golden goose” will sign up. Treat inbound as a lead channel and screen it.
  • Targeted outreach: Cold outreach to niche publishers, micro-influencers, and up-and-coming creators. Start small—don’t try to land the top-tier publications immediately. Work your way up the ladder.

The sweet spot is publishers who are hungry, have category-aligned audiences, and don’t already have lucrative deals with incumbents. Offer fair commissions, provide strong creative assets, and build relationships. Micro-influencers and niche YouTube channels are often easier wins than the big review sites early on.

When should you start an affiliate program?

My older view was: build SEO and product-market fit first, then bolt on affiliates. Now I often recommend launching an affiliate program earlier—especially if you’re entering an established category where mentions matter. If search and LLM outputs already favor incumbents, you need every lever available to flip mentions and shape narrative.

A practical sequencing I like:

  1. Get your SEO foundation in place: homepage, pricing page, competitor/alternatives pages, and key comparison (“X vs Y”) and roundup posts.
  2. Run modest bottom-of-funnel paid ads if it makes sense to buy initial conversions and test messaging.
  3. Launch an affiliate program early enough to start getting mentions from niche publishers and micro-influencers.
  4. Layer on editorial and multi-channel distribution to build brand and email lists.

For SEO foundation, a small company only needs 15–30 high-quality pages to start: landing pages, pricing, comparison pages, a handful of evergreen posts. Those pages should be architected for conversion and updated regularly—ideally quarterly. Freshness matters now more than it used to.

Pricing pages and the unforgiving nature of discovery

One concrete example I shared on the podcast: I’ve audited companies where the pricing page had a noindex tag. That tag tells search engines to ignore the page entirely, so potential customers can’t find pricing via search. If users can’t discover pricing, conversions and organic visibility suffer. A single small technical issue like this can cripple discovery even when everything else in the content engine looks healthy.
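This class of problem is cheap to catch with a scripted check. Here is a minimal sketch using Python’s standard-library HTML parser; the `pricing_html` string is a stand-in for a fetched pricing page, and a real audit would loop this over your key URLs:

```python
from html.parser import HTMLParser

class NoindexDetector(HTMLParser):
    """Flags <meta name="robots"> tags whose content includes 'noindex'."""

    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        d = dict(attrs)
        if (d.get("name", "").lower() == "robots"
                and "noindex" in (d.get("content") or "").lower()):
            self.noindex = True

def has_noindex(html):
    """Return True if the page HTML carries a robots noindex directive."""
    parser = NoindexDetector()
    parser.feed(html)
    return parser.noindex

# Stand-in for the HTML of a fetched /pricing page:
pricing_html = '<html><head><meta name="robots" content="noindex,follow"></head></html>'
```

Note this only covers the meta tag; a full audit would also check the `X-Robots-Tag` HTTP header and robots.txt, which can block indexing the same way.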

Practical steps to launch an affiliate program that scales

Here’s a checklist I use when helping clients stand up affiliate programs:

  1. Decide pricing and commission structure aligned to LTV and margins.
  2. Choose an affiliate platform that integrates with your tracking and payout needs.
  3. Create dedicated affiliate landing pages and tracking links (UTMs) so you can attribute properly.
  4. Build an affiliate resource hub: creatives, copy snippets, demo videos, comparison charts, and onboarding guides.
  5. Start outreach to niche publishers and micro-influencers with tailored offers and co-marketing ideas.
  6. Monitor performance, optimize commissions and creative, and slowly trade up to higher-authority publishers.
  7. Integrate affiliates into broader BD and PR outreach—affiliate relationships often open doors for partnerships.
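For step 3, generating consistent UTM-tagged tracking links is worth scripting so every affiliate gets attributed the same way. A minimal sketch using Python’s standard library; the domain, campaign name, and affiliate id are hypothetical placeholders:

```python
from urllib.parse import urlencode, urlsplit, urlunsplit, parse_qsl

def utm_link(base_url, affiliate_id,
             source="affiliate", medium="referral",
             campaign="partner-program"):
    """Append UTM parameters (plus an affiliate id) to a landing-page URL,
    preserving any query parameters already present on it."""
    scheme, netloc, path, query, frag = urlsplit(base_url)
    params = dict(parse_qsl(query))
    params.update({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
        "utm_content": affiliate_id,  # identifies which affiliate sent the click
    })
    return urlunsplit((scheme, netloc, path, urlencode(params), frag))

# Hypothetical landing page and affiliate:
link = utm_link("https://example.com/partners/landing", "creator-042")
```

Keeping the parameter naming in one function means your analytics reports stay clean: one `utm_campaign` for the program, with `utm_content` distinguishing individual affiliates.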

Wrap-up: diversify, humanize, and build durable systems

If there’s one through-line to everything I’ve learned, it’s this: don’t be hostage to one channel or one platform. Build a content engine that combines a solid SEO foundation, human editorial voice, smart use of AI to accelerate operations, and strategic affiliate programs to earn mentions across the web.

AI is a force multiplier when used correctly—but it’s not a substitute for real insight. Affiliate marketing is no longer just a growth add-on; in many categories it’s a strategic lever that shapes discovery and long-term visibility. And finally, update your core pages often, watch the technical fundamentals (like whether the pricing page is indexable), and keep your distribution channels diversified so one platform’s disruption won’t sink your growth.

If you’re a marketer, founder, or growth leader, focus on the foundation: homepage, pricing, key landing pages, and a compact list of priority posts. Then add affiliates early if you need to shape category conversation. Use AI to speed up the work, but keep humans in the driver’s seat for voice and angle. Do that, and you’ll build content that actually moves the business.

Final thought

I loved talking with Mukund about this—if you want to dive deeper on any of the tactics above (keyword clustering, affiliate compensation math, onboarding affiliates, or how to structure quarterly content refreshes), I’m happy to share templates and examples. Start small, iterate, and always ask: will this content or program still be driving value in six months? If not, rethink it.

Subscribe to my YouTube Channel

