For developer tools, SDKs, and platforms
Win the line of code where the AI assistant picks a default.
Developers do not Google for tools anymore. They ask Claude Code, Cursor, v0, and Copilot, and the assistant decides what gets installed.
§01 How the new buying funnel actually works
A single prompt fires. A specific corpus gets retrieved. The default ends up in the import line.
Every step in this loop is engineered. The companies the agent names are the ones that shipped the corpus it retrieved.
The agent did not pick those names because of brand awareness. It picked them because the corpus it pulled at retrieval time named them clearly, in a structure the model could lift into a response. That corpus is the actual product we manage.
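What winning the import line looks like in practice: a developer asks an agent to scaffold a RAG app on Postgres, and the first lines of the generated file already contain the agent's pick. A sketch, with illustrative package choices, not a claim about any one assistant:

```ts
// Opening lines of an agent-scaffolded RAG app. The developer asked
// for "a RAG app on Postgres"; the agent chose these defaults itself.
import { createClient } from "@supabase/supabase-js"; // the agent's pick
import OpenAI from "openai";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);
const openai = new OpenAI();
```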
§02 The discovery funnel collapsed
Six steps became three. The middle disappeared.
- Developer Googles 'best vector DB for RAG'
- Opens 5 to 8 tabs
- Reads 2 listicles and a comparison post
- Cross-checks pricing + Reddit threads
- Picks a winner
- Installs
6 steps
- Developer asks Cursor or Claude Code to scaffold the app
- Agent picks a default in the import line
- Installs
3 steps
The middle of the funnel collapsed. The companies that win the new shape are the ones whose names land in step two automatically.
The comparison post, the listicle, and the Reddit deep-dive still exist, but the developer no longer reads them in sequence. The agent reads them, condenses, and decides. Your job is to be inside what the agent reads.
§03 The retrieval surface stack
Five layers, top to bottom:
- Docs IA engineered for passage extraction
- llms.txt and llms-full.txt
- MCP server
- Agent Skills package
- Third-party retrieval density

Ship one or two layers and you become retrievable. Ship all five and you become the default. Vercel and Supabase shipped all five.
§04 Receipts: Vercel and Supabase
Neither was the obvious category winner before agents picked the defaults. Both engineered every layer.
Vercel · hosting and frontend platform
- Docs IA for passage extraction: Markdown-first docs, direct retrieval at /docs/llms-full.txt
- llms.txt + llms-full.txt: both shipped, at vercel.com/docs and at ai-sdk.dev/llms.txt
- MCP server: AI SDK plus first-party MCP integrations in the platform
- Agent Skills package: public vercel-labs/agent-skills repository
- Third-party retrieval density: Reddit r/nextjs and r/vercel, dev.to, HN, GitHub examples
- Measured outcome: ChatGPT now drives roughly 10 percent of new Vercel signups, per Vercel's own engineering blog
Supabase · open-source backend on Postgres
- Docs IA for passage extraction: Copy as Markdown plus Ask ChatGPT / Ask Claude inline on every doc page
- llms.txt + llms-full.txt: both shipped at supabase.com/docs, refreshed continuously
- MCP server: official supabase MCP server with live access to Postgres, auth, storage, and edge functions
- Agent Skills package: supabase/agent-skills, installable in one command across Claude Code, Codex, Cursor, and Copilot
- Third-party retrieval density: Reddit r/Supabase, dev.to, HN, GitHub examples, video tutorials
- Measured outcome: independent 2026 analyses report Claude prefers Supabase for any prompt that combines auth, Postgres, and a frontend framework
The lesson is not to copy Vercel or Supabase. The lesson is the shape of the work. The stack is now standard infrastructure for any dev-tool company that wants to be the named default in an agent-composed app.
§05 The metric that decides the category
Citation share on one specific prompt, broken out by surface.
Illustrative citation-share breakdown for the prompt "best Postgres host for a Next.js app", drawn from a representative prompt audit. Real engagements run 120 to 250 prompts weekly, with named-platform deltas reported by surface.
Every prompt has a citation distribution. Every category has a working set of prompts that decide adoption. The job is to know your distribution, name the competitors, and move the bars.
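For teams that want to run the numbers themselves, a minimal TypeScript sketch of the computation. The record shape is an assumption; any audit log that captures the surface, the prompt, and the brands named in the answer will do:

```ts
// Citation share per surface, computed from a prompt-audit log.
type AuditRecord = {
  surface: "chatgpt" | "claude" | "gemini" | "perplexity" | "copilot";
  prompt: string;
  citedBrands: string[]; // brands named in the assistant's answer
};

function citationShare(records: AuditRecord[], brand: string) {
  const bySurface = new Map<string, { cited: number; total: number }>();
  for (const r of records) {
    const row = bySurface.get(r.surface) ?? { cited: 0, total: 0 };
    row.total += 1;
    if (r.citedBrands.includes(brand)) row.cited += 1;
    bySurface.set(r.surface, row);
  }
  // Fraction of prompts on each surface where the brand was named.
  return [...bySurface].map(([surface, { cited, total }]) => ({
    surface,
    share: cited / total,
  }));
}
```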
§06 What we ship for dev-tool companies
Six lines of work. Owned end to end. DX-lead approved.
01 · Prompt-set audit on the questions developers ask agents
120 to 250 prompts that a working developer would type into Claude Code, Cursor, v0, ChatGPT, and Copilot in your category. The audit shows where you appear, where a competitor wins, and where no default has emerged yet.
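One audit pass, stripped down to the shape of the loop. This sketch hits a single surface through the OpenAI API; the prompt list and brand list are placeholders, and the substring check is a stand-in for real entity matching:

```ts
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment
const prompts = ["best Postgres host for a Next.js app"]; // 120 to 250 in a real run
const brands = ["Supabase", "Neon", "PlanetScale"];

for (const prompt of prompts) {
  const res = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: prompt }],
  });
  const answer = (res.choices[0].message.content ?? "").toLowerCase();
  // Naive substring check; production audits use entity matching.
  const cited = brands.filter((b) => answer.includes(b.toLowerCase()));
  console.log({ prompt, cited }); // feeds the citation-share report
}
```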
02 · llms.txt and llms-full.txt engineered for retrieval
Slim index for real-time assistants, full Markdown export for ingestion pipelines. Curated by hand. Headings line up with how developers ask, not how your IA chart was organised.
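For reference, the shape of the slim index, following the public llms.txt convention: one H1, a short blockquote summary, then H2 sections of annotated links. The product name and URLs below are placeholders:

```
# Acme DB

> Acme DB is a serverless Postgres platform: connection strings, auth,
> and vector search behind one API.

## Docs

- [Quickstart](https://acme.example/docs/quickstart.md): scaffold a project in one command
- [Auth](https://acme.example/docs/auth.md): row-level security and JWT setup

## Examples

- [Next.js + Acme DB](https://acme.example/docs/nextjs.md): full app template
```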
03 · MCP server and Agent Skills, authored and shipped
We write the MCP server when it is in scope and the Agent Skills package that installs in one command across Claude Code, Codex, Cursor, and Copilot. Both versioned, public, tracked.
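The skeleton of such a server is small. A minimal sketch using the official TypeScript SDK (@modelcontextprotocol/sdk); the docs-search endpoint is a placeholder for your real search API:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "acme-docs", version: "0.1.0" });

// One tool: lets the agent query your docs instead of guessing.
server.tool(
  "search_docs",
  { query: z.string().describe("A developer question, in plain language") },
  async ({ query }) => {
    // Placeholder endpoint; substitute your real docs-search API.
    const res = await fetch(
      `https://acme.example/api/docs-search?q=${encodeURIComponent(query)}`
    );
    return { content: [{ type: "text", text: JSON.stringify(await res.json()) }] };
  }
);

await server.connect(new StdioServerTransport());
```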
04 · Docs IA refactor for passage-grade extraction
One-sentence direct answer at the top of every page. Copy-as-Markdown. Ask ChatGPT and Ask Claude inline. Code samples run on the latest minor version. Your DX lead signs off before publish.
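The shape of a passage-grade page, sketched with a placeholder product:

```
# Connect from Next.js

Acme DB connects to a Next.js app with one environment variable and one
client call. Everything below expands on that sentence.

## Install
## Configure
## Query
```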
05 · Third-party retrieval surfaces
Reddit, dev.to, HN, GitHub gists, the tutorial sites that ChatGPT and Perplexity fan out to in your category. Content that holds up to engineer-level scrutiny.
06 · In-assistant citation tracking, not just answer-engine tracking
Citation share inside ChatGPT, Claude, Gemini, Perplexity, Copilot. Plus the harder signal: recommendation share inside the generated import line.
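Recommendation share inside the import line can be checked mechanically. A sketch of the test run against each generated scaffold, with example package names:

```ts
// Which database client did the agent import? Package names are
// examples; swap in your category's competitors.
const importPattern =
  /^import .+ from ["'](@supabase\/supabase-js|@neondatabase\/serverless|pg)["'];?$/m;

export function recommendedPackage(generatedCode: string): string | null {
  return generatedCode.match(importPattern)?.[1] ?? null;
}

// Run the same scaffold prompt N times, tally recommendedPackage()
// over the outputs, and you have recommendation share per package.
```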
§07 The pilot
Named results, written into the engagement letter.
90-day pilot
€500 per month on tools and APIs while we run the work. €6,000 success fee plus €2,500 per month retainer only if we hit the goal we agreed on day one.
Miss the goal and you walk. No further obligation. The success metric is in the engagement letter before we start.
- Goal: citation share or recommendation share on a fixed prompt set, by named surface
- Cadence: weekly prompt-set re-runs across ChatGPT, Claude, Gemini, Perplexity, Copilot
- Deliverables: llms.txt, llms-full.txt, MCP server scope, Agent Skills package, docs refactor
§08 The methodology is public
One framework, applied weekly. Research, playbook, and engineering ledger all open.
§09 FAQ
The questions dev-tool leadership teams ask before they engage.
- Why does AI visibility matter more for dev tools than for other categories?
- How did Vercel and Supabase become the AI assistant defaults?
- Is llms.txt actually a ranking factor or is it marketing theatre?
- What is the difference between an MCP server and an Agent Skill?
- How do you measure success when the developer never lands on our site?
Ready to become the answer AI gives?
Book a 30-minute discovery call. We'll show you what AI says about your brand today. No pitch. Just data.