
For developer tools, SDKs, and platforms

Win the line of code where the AI assistant picks a default.

Developers do not Google for tools anymore. They ask Claude Code, Cursor, v0, and Copilot, and the assistant decides what gets installed.

§01 How the new buying funnel actually works

A single prompt fires. A specific corpus gets retrieved. The default ends up in the import line.

01 · Developer prompt

  "scaffold me a saas with auth, postgres, and deploy on vercel"

02 · Agent retrieval fanout

  llms-full.txt · supabase.com
  llms.txt · vercel.com
  agent-skill · supabase/agent-skills
  mcp · supabase mcp server
  thread · reddit.com/r/nextjs

03 · Generated import line

  // app/page.tsx
  import { createClient } from "@supabase/supabase-js"
  $ pnpm add @vercel/analytics
  $ vercel deploy

Every step in this loop is engineered. The companies named in step 03 shipped the corpus retrieved in step 02.

The agent did not pick those names because of brand awareness. It picked them because the corpus it pulled at retrieval time named them clearly, in a structure the model could lift into a response. That corpus is the actual product we manage.
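
What step 02 looks like from the agent's side, as a minimal sketch: parallel fetches over a handful of public surfaces, with the answer composed from whatever comes back. The URLs mirror the panel above and are indicative only; no real assistant exposes exactly this code.

  // Illustrative retrieval fan-out (TypeScript, any runtime with fetch and top-level await).
  const surfaces = [
    "https://supabase.com/llms.txt",   // slim first-party index
    "https://vercel.com/llms.txt",
    "https://www.reddit.com/r/nextjs", // third-party pool
  ];

  const corpus = await Promise.all(
    surfaces.map(async (url) => ({ url, text: await (await fetch(url)).text() })),
  );

  // The model answers out of `corpus`: whatever these documents name clearly
  // is what lands in the generated import line.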

§02 The discovery funnel collapsed

Six steps became three. The middle disappeared.

Pre-agent funnel · 2023
  1. Developer Googles 'best vector DB for RAG'
  2. Opens 5 to 8 tabs
  3. Reads 2 listicles and a comparison post
  4. Cross-checks pricing + Reddit threads
  5. Picks a winner
  6. Installs

6 steps

Agent funnel · 2026
  1. Developer asks Cursor or Claude Code to scaffold the app
  2. Agent picks a default in the import line
  3. Installs

3 steps

The middle of the funnel collapsed. The companies that win the new shape are the ones whose names land in step two automatically.

The comparison post, the listicle, and the Reddit deep-dive still exist, but the developer no longer reads them in sequence. The agent reads them, condenses, and decides. Your job is to be inside what the agent reads.

§03 The retrieval surface stack

Five layers. Ship one or two and you become retrievable. Ship all five and you become the default.

The retrieval surface stack

What we ship, top to bottom:

05 · Third-party retrieval pool (ChatGPT · Perplexity · Gemini)
  Reddit threads, dev.to tutorials, HN comments, GitHub gists, category-tutorial sites. The corpus assistants fan out to before they ever touch your docs.

04 · Agent Skills package (Claude Code · Codex · Cursor · Copilot)
  Installable instructions that teach the agent how your primitives compose. Distributed via the Agent Skills Open Standard; pinned to a version.

03 · MCP server (Claude Code · Cursor · Claude apps)
  Runtime bridge. Lets the agent list projects, run queries, deploy functions, mutate state. The skill makes the agent reach for you; MCP lets it finish the job.

02 · llms.txt + llms-full.txt (all retrieval-augmented agents)
  Slim index for real-time lookup; full Markdown export for ingestion pipelines. Curated by hand, not auto-generated from a sitemap. Sketched below.

01 · Docs IA refactored for passage extraction (every assistant, every surface)
  Direct one-sentence answers at the top of every page. Copy-as-Markdown. Ask ChatGPT / Ask Claude buttons inline. Code samples that actually run on the latest minor version.

Vercel and Supabase shipped all five.
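
A hand-curated llms.txt is a small Markdown file: an H1, a one-line summary, then sections of annotated links, per the community llms.txt proposal. A minimal sketch for a hypothetical product called Acme; every name and URL below is invented:

  # Acme

  > Acme is a hosted Postgres platform with built-in auth and edge functions.

  ## Docs

  - [Quickstart](https://acme.dev/docs/quickstart.md): create a project and run the first query
  - [Auth](https://acme.dev/docs/auth.md): sessions, tokens, row-level security

  ## Optional

  - [Changelog](https://acme.dev/changelog.md): release notes by minor version

The link annotations are the curation work: they should read the way developers ask, which is exactly what an auto-generated sitemap dump skips.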

§04 Receipts: Vercel and Supabase

Neither was the obvious category winner before agents picked the defaults. Both engineered every layer.

Vercel
Hosting · frontend platform

  Docs IA for passage extraction: Markdown-first docs, direct retrieval at /docs/llms-full.txt
  llms.txt + llms-full.txt: both shipped, at vercel.com/docs and at ai-sdk.dev/llms.txt
  MCP server: AI SDK plus first-party MCP integrations in the platform
  Agent Skills package: public vercel-labs/agent-skills repository
  Third-party retrieval density: Reddit r/nextjs and r/vercel, dev.to, HN, GitHub examples
  Measured outcome: ChatGPT now drives roughly 10 percent of new Vercel signups, per Vercel's own engineering blog

Supabase
Open-source backend on Postgres

  Docs IA for passage extraction: Copy-as-Markdown plus Ask ChatGPT / Ask Claude inline on every doc page
  llms.txt + llms-full.txt: both shipped at supabase.com/docs, refreshed continuously
  MCP server: official Supabase MCP server with live access to Postgres, auth, storage, and edge functions
  Agent Skills package: supabase/agent-skills, installable in one command across Claude Code, Codex, Cursor, and Copilot
  Third-party retrieval density: Reddit r/Supabase, dev.to, HN, GitHub examples, video tutorials
  Measured outcome: independent 2026 analyses report that Claude consistently defaults to Supabase on prompts combining auth, Postgres, and a frontend framework

The lesson is not to copy Vercel or Supabase. The lesson is the shape of the work. The stack is now standard infrastructure for any dev-tool company that wants to be the named default in an agent-composed app.

§05 The metric that decides the category

Citation share on one specific prompt, broken out by surface.

Citation share by surface
Prompt: "best Postgres host for a Next.js app"
Legend: category default (Supabase) · challenger (Neon) · long tail

  Claude Code: 62% · 14% · 24%
  Cursor: 57% · 19% · 24%
  ChatGPT: 48% · 22% · 30%
  Perplexity: 41% · 26% · 33%
  Copilot: 38% · 24% · 38%

Illustrative shares from a representative prompt audit. Real engagements run 120 to 250 prompts weekly, with named-platform deltas reported by surface.

Every prompt has a citation distribution. Every category has a working set of prompts that decide adoption. The job is to know your distribution, name the competitors, and move the bars.
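
A sketch of the mechanics behind those bars, not our production tooling: run a fixed prompt set against each surface and count which vendor each completion names. askAssistant is a hypothetical stand-in for whatever API or harness drives a given surface.

  // Hypothetical harness: askAssistant runs one prompt on one surface.
  type Surface = "claude-code" | "cursor" | "chatgpt" | "perplexity" | "copilot";
  declare function askAssistant(surface: Surface, prompt: string): Promise<string>;

  const vendors = ["supabase", "neon"]; // the named competitors to track

  // Citation share = prompts whose answer names the vendor / total prompts.
  async function citationShare(surface: Surface, prompts: string[]) {
    const counts: Record<string, number> = Object.fromEntries(vendors.map((v) => [v, 0]));
    for (const prompt of prompts) {
      const answer = (await askAssistant(surface, prompt)).toLowerCase();
      for (const v of vendors) {
        if (answer.includes(v)) counts[v] += 1;
      }
    }
    return Object.fromEntries(
      Object.entries(counts).map(([v, n]) => [v, n / prompts.length]),
    );
  }

Re-run weekly on the same prompt set and the per-surface deltas are the report.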

§06 What we ship for dev-tool companies

Six lines of work. Owned end to end. DX-lead approved.

01

Prompt-set audit on the questions developers ask agents

120 to 250 prompts a working developer types into Claude Code, Cursor, v0, ChatGPT, and Copilot in your category. The audit shows where you appear, where a competitor wins, and where the answer pool has not chosen yet.

02

llms.txt and llms-full.txt engineered for retrieval

Slim index for real-time assistants, full Markdown export for ingestion pipelines. Curated by hand. Headings line up with how developers ask, not how your IA chart was organised.

03

MCP server and Agent Skills, authored and shipped

We write the MCP server when it is in scope and the Agent Skills package that installs in one command across Claude Code, Codex, Cursor, and Copilot. Both versioned, public, tracked.
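
What the Agent Skills half looks like on disk, as a minimal sketch following the SKILL.md convention Claude Code reads: YAML frontmatter that tells the agent when to load the skill, then plain instructions. The product, paths, and package names are invented.

  ---
  name: acme-postgres
  description: Use when the user wants Acme Postgres, auth, or migrations in a Next.js app.
  ---

  # Acme Postgres skill

  1. Install the client: pnpm add @acme/client
  2. Create the client in app/lib/acme.ts; read the connection string from ACME_DB_URL.
  3. Never inline credentials; always read them from environment variables.

The description field is the trigger: it is what the agent matches against the user's prompt before the full instructions are loaded.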

04

Docs IA refactor for passage-grade extraction

One-sentence direct answer at the top of every page. Copy-as-Markdown. Ask ChatGPT and Ask Claude inline. Code samples run on the latest minor version. Your DX lead signs off before publish.
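
Of those, Copy-as-Markdown is the piece teams ask about most, and it is a few lines of client code. A minimal React sketch, assuming the docs pipeline exposes each page's Markdown source at a sibling URL (an assumption, not a universal convention):

  // app/components/CopyAsMarkdown.tsx
  "use client";

  export function CopyAsMarkdown({ mdUrl }: { mdUrl: string }) {
    // Fetch the page's Markdown source and put it on the clipboard,
    // so a developer can paste the whole doc straight into an assistant.
    async function copy() {
      const md = await (await fetch(mdUrl)).text();
      await navigator.clipboard.writeText(md);
    }
    return <button onClick={copy}>Copy as Markdown</button>;
  }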

05

Third-party retrieval surfaces

Reddit, dev.to, HN, GitHub gists, the tutorial sites that ChatGPT and Perplexity fan out to in your category. Content that holds up to engineer-level scrutiny.

06

In-assistant citation tracking, not just answer-engine tracking

Citation share inside ChatGPT, Claude, Gemini, Perplexity, Copilot. Plus the harder signal: recommendation share inside the generated import line.

§07 The pilot

Named results, written into the engagement letter.

90-day pilot

€500 per month on tools and APIs while we run the work. €6,000 success fee plus €2,500 per month retainer only if we hit the goal we agreed on day one.

Miss the goal and you walk. No further obligation. The success metric is in the engagement letter before we start.

Goal: citation share or recommendation share on a fixed prompt set, by named surface

Cadence: weekly prompt-set re-runs across ChatGPT, Claude, Gemini, Perplexity, Copilot

Deliverables: llms.txt, llms-full.txt, MCP server scope, Agent Skills package, docs refactor

§08 FAQ

The questions dev-tool leadership teams ask before they engage.

Why does AI visibility matter more for dev tools than for other categories?
Because the buyer asks an AI assistant before they ever read your docs. Developers used to choose a database, framework, or SDK by Googling and reading comparison posts. In 2026 they ask Claude Code, Cursor, v0, or ChatGPT to scaffold the app, and the assistant picks the default. Vercel has publicly said ChatGPT drives around ten percent of new signups, which is a measurable share of new pipeline that does not exist in any other category at this scale.
How did Vercel and Supabase become the AI assistant defaults?
They turned their docs into a retrieval surface, not a ranking surface. Both ship slim llms.txt indexes plus full llms-full.txt exports. Supabase added an official MCP server with live access to Postgres and auth, an Agent Skills package compatible with Claude Code, Codex, Cursor, and Copilot, and direct Ask ChatGPT and Ask Claude buttons on every doc page. When an assistant fans out a query like "next.js auth" or "vector store for RAG", these two products are inside the context window first.
Is llms.txt actually a ranking factor or is it marketing theatre?
It is not a ranking factor in Google search; Google has said so publicly. It is a retrieval factor for AI assistants, which is a different thing. Cursor, Claude Code, and Copilot fetch documentation in real time when a developer asks a product-specific question. A clean llms.txt lowers the token cost of that retrieval and raises the accuracy of what comes back. We treat it as a tactical asset, not a silver bullet.
What is the difference between an MCP server and an Agent Skill?
An MCP server is a runtime bridge. It lets the agent list projects, run a query against the database, or deploy a function while the developer is mid-prompt. An Agent Skill is a packaged instruction set that teaches the agent how your product is meant to be used. The skill makes the agent reach for you; the MCP server lets the agent finish the job. For a dev-tool company chasing AI visibility, both are standard infrastructure now.
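A minimal sketch of the runtime-bridge half, using the public @modelcontextprotocol/sdk for TypeScript; the server name, tool, and stubbed data are invented for illustration:

  // A one-tool MCP server: the agent can call list_projects mid-prompt.
  import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
  import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

  const server = new McpServer({ name: "acme", version: "0.1.0" });

  server.tool("list_projects", "List the projects in this account", async () => ({
    // Stubbed: a real server would call the platform API here.
    content: [{ type: "text" as const, text: JSON.stringify(["demo-app", "staging"]) }],
  }));

  // Speak MCP over stdio so Claude Code or Cursor can launch it as a subprocess.
  await server.connect(new StdioServerTransport());

An Agent Skill, by contrast, ships no runtime at all: it is instructions the agent loads, which is why the two compose rather than compete.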
How do you measure success when the developer never lands on our site?
Citation share and recommendation share inside the assistant are the primary metrics, measured on a curated prompt set of 120 to 250 questions. We pair that with secondary signals that do reach your site: direct signups attributed to AI referrers, MCP install events, and Agent Skill install counts. The pilot success metric is written into the engagement letter on day one.

Ready to become the answer AI gives?

Book a 30-minute discovery call. We'll show you what AI says about your brand today. No pitch. Just data.