GEO Strategy · 10 min read

Google Just Defined a Second AEO. Here's What 'Agentic Engine Optimization' Means for Your Content.


Cite Solutions

Research · April 18, 2026

Key takeaway

Make every important page easier for answer engines to quote, trust, and reuse.

Key moves:

1. Lead each section with a direct answer block before expanding into detail.

2. Put evidence close to the claim so AI systems can extract support cleanly.

3. Use schema and strong information architecture to improve eligibility, not as a gimmick.

On April 11, Addy Osmani, Director of Engineering at Google Cloud AI, published a formal framework for a category he calls Agentic Engine Optimization.

The acronym is identical to Answer Engine Optimization. The discipline is different.

Answer Engine Optimization (the one most B2B marketing teams have been reading about for six months) is about getting your brand cited when a human asks ChatGPT, Perplexity, or Gemini for a recommendation. Agentic Engine Optimization is about making your content usable by AI agents that are not browsing on behalf of a human reading the output. They are completing a task.

That distinction matters because the optimizations pull in different directions. A page built to win human-facing citations needs strong opening hooks, branded authority signals, and reviewed social proof. A page built for an AI agent needs fewer tokens, cleaner markdown, and a machine-readable capability signal at the top. Osmani's framework is the first time a major platform-affiliated engineer has formally defined what agent-facing content looks like, and he released an open-source audit tool to check for the signals.

This post covers what the framework actually says, where it differs from standard answer engine optimization, and what B2B SaaS teams with documentation, knowledge bases, or developer-facing content should do about it now.

Why a Google engineer is defining this now

LLM bots now crawl the web 3.6x more frequently than Googlebot, per Position Digital's April 2026 data. ChatGPT agents abandon pages immediately 63% of the time, extracting no meaningful content before leaving. That combination, high crawl volume paired with high abandonment, means the technical experience agents have on a page now filters visibility before content quality matters at all.

Osmani frames the definition directly: "Agentic Engine Optimization means structuring and serving technical content so AI agents can use it, not just render it."

The useful distinction buried in that sentence is "use, not just render." A human reader needs a page to load, look clean, and lead with something compelling. An AI agent needs the extractable answer, ideally in the first 500 tokens, with clearly structured metadata about what else is available on the page.

Osmani writes that "agents have limited patience for preamble." That is a polite way of saying that if your answer is on line 140 of a 400-line page, the agent will not find it.

The 5 pillars of Agentic Engine Optimization


1. Discoverability: llms.txt as a discovery layer. AGENTS.md for agent-facing capability signaling before full context is consumed.

2. Parsability: Serve clean markdown over HTML. Agents parse structured text faster than rendered pages.

3. Token efficiency: Quick starts ≤15,000 tokens. Conceptual guides ≤20,000. API references ≤25,000. Expose token counts publicly.

4. Capability signaling: skill.md or AGENTS.md files that tell agents what your content can do before they commit context to reading it.

5. Access control: robots.txt directives and rate limits that manage which agents can access what, and how often.

Source: Addy Osmani, "Agentic Engine Optimization" (April 11, 2026) · open-source audit tool: github.com/addyosmani/agentic-seo

Each pillar addresses a distinct failure mode Osmani has observed in agent interactions with documentation and knowledge content.

Discoverability is the agent version of the "can they find it" question. For traditional SEO, sitemaps and internal links handled discoverability. For agents, Osmani recommends three file-based discovery layers: llms.txt at your domain root, AGENTS.md for agent-specific capability information, and skill.md for signaling specific capabilities an agent can invoke. Search volume for "llms.txt" now sits at 5,400 per month, with "agents.md" at 4,400 in March 2026, per DataForSEO. These are live searches, not theoretical concerns.
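For illustration, a minimal llms.txt might look like the sketch below. The product name and URLs are hypothetical; the shape follows the common llms.txt convention of an H1 title, a blockquote summary, and sections of annotated links:

```markdown
# Acme Platform Docs

> Acme is a hypothetical B2B SaaS platform. This file points AI systems
> at the canonical, most useful documentation pages.

## Docs

- [Quick start](https://docs.example.com/quickstart.md): install, authenticate, make a first API call
- [API reference](https://docs.example.com/api.md): REST endpoints and error codes

## Optional

- [Changelog](https://docs.example.com/changelog.md): release history
```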

Parsability is about serving agents clean markdown rather than rendered HTML. An agent consuming your documentation processes markdown faster than a DOM tree, extracts structured content more reliably, and spends fewer tokens on layout wrapper content that has no semantic value. Osmani recommends exposing a markdown version of your docs pages, typically with a .md URL suffix or content negotiation via Accept: text/markdown headers.
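The content-negotiation idea can be sketched in framework-agnostic Python. The function below is a hypothetical helper, not part of any docs framework, and its Accept-header parsing is deliberately naive (no quality-value ordering):

```python
def negotiate_docs_format(path: str, accept_header: str) -> str:
    """Decide which representation of a docs page to serve.

    Returns "markdown" when the client requested a .md URL suffix or
    sent an Accept header listing text/markdown; otherwise "html".
    """
    if path.endswith(".md"):
        return "markdown"
    # Naive Accept parsing: strip q-values, compare media types only.
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    if "text/markdown" in accepted:
        return "markdown"
    return "html"
```

A request for `/docs/auth` with `Accept: text/markdown, text/html;q=0.8` would get the markdown variant; a plain browser request would fall through to HTML.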

Token efficiency is the pillar most specific to agents. Osmani gives concrete budgets: quick-start guides under 15,000 tokens, conceptual guides under 20,000, API references under 25,000. The reasoning: agents operate within constrained context windows, and content that exceeds these budgets risks being truncated, chunked poorly, or skipped entirely. He recommends exposing token counts publicly on pages so agents can decide before loading whether the content fits their remaining context budget.
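A pre-publish budget check might look like the following sketch. The 4-characters-per-token rule is a rough English-prose approximation, not a real tokenizer, and the budget numbers are the ones Osmani quotes above:

```python
# Token budgets from Osmani's framework, as cited in this article.
TOKEN_BUDGETS = {
    "quick_start": 15_000,
    "conceptual_guide": 20_000,
    "api_reference": 25_000,
}

def estimate_tokens(text: str) -> int:
    """Rough estimate: ~4 characters per token for English prose.

    A real check would use the target model's tokenizer; this heuristic
    is only good enough to flag pages that are far over budget.
    """
    return max(1, len(text) // 4)

def within_budget(text: str, doc_type: str) -> bool:
    """True if the estimated token count fits the budget for doc_type."""
    return estimate_tokens(text) <= TOKEN_BUDGETS[doc_type]
```

Run against a docs build, this flags which pages to split or trim before agents ever see them.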

Capability signaling covers communicating what your content can do before the agent commits context budget to reading it. An AGENTS.md or skill.md file declares, upfront, what kinds of tasks your content supports. For SaaS documentation, that could mean listing which integrations you cover, which API endpoints are documented, and which deployment targets are supported. The agent reads the signal, decides the content is relevant, and commits token budget intentionally.

Access control is the governance pillar. Which agents get to crawl which content, at what rate, and with what authorization. Osmani frames this through robots.txt directives, rate limit headers, and authentication gates for premium content. This matters because LLM bots now crawl 3.6x more than Googlebot. Without access control, documentation sites absorb more bandwidth from agent traffic than from human visitors.
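A policy along these lines might look like the robots.txt sketch below. GPTBot and ClaudeBot are real crawler user-agent tokens; the paths are hypothetical, and Crawl-delay is a nonstandard directive that not every crawler honors:

```
# Let AI crawlers read public docs; keep them out of app and account areas
User-agent: GPTBot
Allow: /docs/
Disallow: /app/

User-agent: ClaudeBot
Allow: /docs/
Disallow: /app/

# Nonstandard directive, honored by some crawlers only
User-agent: *
Crawl-delay: 10
```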

Find out how AI agents see your content today

We run agentic crawl audits alongside traditional citation tracking. Most B2B SaaS docs sites have at least three of the five agentic AEO pillars unaddressed. Our audit shows you which ones and what to fix first.

Book a Discovery Call

The Google internal contradiction

The most important caveat in Osmani's article is what it says about Google's own position.

Osmani writes that Google Search does not use llms.txt or recommend separate Markdown pages for LLMs. John Mueller from Google Search has confirmed this directly. This means the framework is not Google Search guidance. It is agentic workflow guidance from Google Cloud, specifically aimed at AI agent interactions.

That distinction matters for how B2B SaaS content teams should interpret the advice. If you are optimizing a marketing page to appear in Google AI Overviews, llms.txt is not doing citation work. SE Ranking's 300,000-domain study found null citation impact from llms.txt implementation. Osmani is not arguing against that finding. He is addressing a different layer: agent crawl and task completion, not citation frequency.

Two different problems, two different content structures. A marketing blog post that wins citations in ChatGPT's answer stream does not need to be under 15,000 tokens. An API reference that needs to be usable by a coding agent does. Most B2B SaaS companies have both kinds of content and have not yet separated the optimization approaches.

Where Answer Engine Optimization and Agentic Engine Optimization diverge

Both share the AEO acronym. The optimizations work against each other in several places.

| Dimension | Answer Engine Optimization | Agentic Engine Optimization |
|---|---|---|
| Primary reader | Human (via AI synthesis) | AI agent completing a task |
| Success metric | Citation frequency, Share of Model | Task completion rate, token efficiency |
| Content length | Depth wins (2,000-5,000 words) | Token budget caps (15,000-25,000 tokens) |
| Format preference | Rich HTML, schema markup, images | Clean markdown, no layout chrome |
| Critical files | sitemap.xml, schema JSON-LD | llms.txt, AGENTS.md, skill.md |
| First priority | Authority signals in opening paragraphs | Answer in first 500 tokens |
| Bot target | GPTBot, ClaudeBot, PerplexityBot (content crawl) | Coding agents, workflow agents, task runners |

This is not a conflict every team needs to solve today. But documentation-heavy B2B SaaS companies, developer platforms, and API-first products do need to separate the tracks now. Marketing blog optimization and developer docs optimization used to be two versions of the same SEO playbook. They are becoming two playbooks.

What B2B SaaS teams should act on now

For teams without agent-facing content (pure marketing sites, commercial landing pages, brand content), the existing Answer Engine Optimization playbook still applies. Nothing in Osmani's framework changes the citation-based GEO approach for brand visibility work.

For teams with developer documentation, product docs, knowledge bases, or any content that AI coding assistants or workflow agents might consume, three actions are genuinely worth doing now:

Run the agentic-seo audit tool. Osmani released an open-source audit tool on GitHub that checks for the five pillars across a given URL set. It does not require a paid platform. The output identifies which of your pages already meet the signals and which ones are missing them. For most B2B SaaS teams, this is the fastest way to get a current-state baseline.

Add a markdown version of your docs. The simplest high-impact change is making markdown versions of documentation pages available via content negotiation or .md URL suffixes. Docusaurus, MkDocs, and other modern doc frameworks already support this. The work is usually configuration, not rewriting. Agents extracting content from your pages now get the clean structured version instead of DOM-wrapped HTML.

Write an AGENTS.md for your docs root. This is the single new artifact Osmani argues most strongly for. An AGENTS.md at your docs root tells agents what your documentation covers (which products, which APIs, which integrations), what tasks it supports (authentication setup, deployment, error handling), and where the authoritative references live. Agents read this before committing context to reading your content. For B2B SaaS platforms where developer agents are increasingly the primary docs reader, this is becoming baseline infrastructure.
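There is no single mandated schema for AGENTS.md. A minimal sketch for a hypothetical SaaS docs root, covering the three things described above (scope, supported tasks, authoritative references), might read:

```markdown
# AGENTS.md — Acme Platform Documentation

## Coverage
- Products: Acme API, Acme Dashboard (hypothetical examples)
- APIs documented: REST v2, webhooks
- Integrations: Slack, Salesforce

## Supported tasks
- Authentication setup (API keys, OAuth)
- Deployment to AWS and GCP
- Error handling and rate-limit recovery

## Authoritative references
- API reference: /docs/api/ (canonical; markdown available at /docs/api/index.md)
- Changelog: /docs/changelog/
```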

For teams that want a fuller implementation path, the llms.txt explainer covers the discovery layer, and our GEO crawlability audit framework covers the robots.txt and access control pieces. Osmani's framework adds token efficiency and capability signaling as new dimensions on top of that technical foundation.

Why this matters for the citation layer too

Even for teams focused purely on answer engine optimization for marketing content, the Osmani framework has a secondary read.

Position Digital's April 2026 data showed that early-discovery content with 5 to 7 statistics earns a 20% higher citation likelihood. Pages that front-load concrete data inside the first 500 tokens are already the most-cited tier. Osmani's agent-facing recommendation (front-load the answer in the first 500 tokens) maps directly onto what already works for citation-oriented content.

Put differently: the agentic AEO recommendation and the answer engine AEO recommendation converge on the same opening-paragraph discipline. What differs is everything after the first 500 tokens. For human-facing content, depth, authority, and richness win. For agent-facing content, brevity wins past that point.

Teams that get the first 500 tokens right with concrete data, clear answers, and structured metadata will do well on both tracks. Teams that build for human readers first and never think about the agent pathway will find their documentation increasingly bypassed as agents become the primary way developers interact with B2B SaaS products.

FAQ

What is Agentic Engine Optimization?

Agentic Engine Optimization is a framework published on April 11, 2026 by Addy Osmani, Director of Engineering at Google Cloud AI. It defines how to structure technical content so AI agents can use it to complete tasks, not just render it for human readers. The framework has five pillars: discoverability, parsability, token efficiency, capability signaling, and access control. It shares the acronym AEO with Answer Engine Optimization but addresses a different problem: making content consumable by autonomous agents rather than cited in AI answers.

How is Agentic Engine Optimization different from Answer Engine Optimization?

Answer Engine Optimization targets citations in AI-generated answers that humans read. Agentic Engine Optimization targets AI agents completing tasks without a human reviewing the output. Answer engine content typically wins with depth (2,000 to 5,000 words), authority signals, and rich formatting. Agent content wins with token efficiency (under 15,000 to 25,000 tokens depending on document type), clean markdown, and capability signaling files like llms.txt and AGENTS.md. Both can apply to the same brand but to different content types.

Does Google actually recommend llms.txt?

Google Cloud AI's Director of Engineering, Addy Osmani, recommends llms.txt for agentic workflows. Google Search, via John Mueller, does not use llms.txt for ranking and does not recommend separate markdown pages for LLMs. SE Ranking's 300,000-domain study also found null citation impact from llms.txt implementation. Osmani's recommendation is specific to agent crawl and task completion, not citation frequency in Google AI Overviews.

What is an AGENTS.md file?

AGENTS.md is a file Osmani recommends placing at the root of documentation sites to signal capabilities and scope to AI agents before they consume full content. It typically lists what products the documentation covers, which tasks are supported, and where authoritative reference content lives. The goal is to let an agent decide whether to commit context budget to reading your content. Search volume for "agents.md" has grown to 4,400 per month as of March 2026, per DataForSEO, indicating active implementation interest among developer teams.

What are the token length targets in Osmani's framework?

Osmani recommends three specific token budgets: quick-start guides under 15,000 tokens, conceptual guides under 20,000 tokens, and individual API references under 25,000 tokens. The reasoning is that AI agents operate within constrained context windows. Content that exceeds these budgets risks being truncated, chunked poorly, or skipped entirely. He also recommends exposing token counts publicly on pages so agents can decide before loading whether the content fits their remaining context budget.

Should B2B SaaS marketing teams implement Agentic Engine Optimization?

For pure marketing content (landing pages, blog posts, brand pages), the traditional Answer Engine Optimization playbook still applies and Agentic AEO adds little. For teams with developer documentation, product docs, knowledge bases, or API references, the framework is directly relevant because AI coding assistants and workflow agents are increasingly the primary readers of that content. The minimum actions for documentation-heavy teams: run the agentic-seo audit tool, add markdown versions of docs pages, and write an AGENTS.md file at the docs root.

Two AEOs, two playbooks

The most practical takeaway from Osmani's framework is taxonomic, not tactical.

Most B2B SaaS content teams have been treating AI visibility as one problem. The reality is that it has already split into two. Citation-based Answer Engine Optimization for marketing content, and agent-facing Agentic Engine Optimization for documentation, APIs, and developer-facing content. Same acronym, different work, different measurement, different stakeholders inside the company.

Teams that recognize the split early will build the right content for each track. Teams that do not will end up with marketing pages that work for ChatGPT citations and documentation that agents skip past without extracting anything useful. The second failure mode is quiet. You never get a traffic alert for content that an agent chose not to read.

Osmani gave the industry a name for the second discipline and an open-source tool to check for it. The first round of auditing across the B2B SaaS content teams we work with is already showing that most sites are failing three or more of the five pillars. The fix is rarely expensive. It is almost always something teams did not know to look for.

Audit your content for both AEO disciplines

We run Answer Engine Optimization audits for citation visibility and Agentic Engine Optimization audits for agent accessibility. Most B2B SaaS clients discover gaps in one or both. Our report shows which track needs work first.

Get Your AI Visibility Audit

Ready to become the answer AI gives?

Book a 30-minute discovery call. We'll show you what AI says about your brand today. No pitch. Just data.