AEO 101 · Single source of truth on AEO · Updated May 13, 2026

Living playbook · Updated daily

AEO 101 · the playbook that ships what is working in answer engine optimization, right now.

The canonical real-time reference for AEO. Curated from our research library and the actual tactics we are running on engagements this week. Refreshed every morning.

Last revised May 13, 2026 · 13 tactics tracked · 6 platform updates this week

§00 What you are reading

Compiled from our daily research of the AEO space.

Every morning our research team logs what shifted in the answer engine ecosystem the day before. Platform updates from ChatGPT, Claude, Gemini, Perplexity, and Google AI Overviews. New citation behaviors. Source-pool drift on the prompts our clients care about. Tactics that stopped working. Tactics that started working.

The page below is the synthesis. Not a static guide written once and forgotten. The platform deltas are dated entries from the last 14 days. The tactics are pulled from briefs our team filed this month. The deprecations are the things AEO operators used to recommend that answer engines no longer reward.

The point is simple. If you want to be the answer AI gives, your strategy has to update at the speed AI changes. Every change in the AEO space affects which brands get cited, which pages get extracted, and which sources AI trusts. This page tracks those changes and tells you what to do about them.

Platforms monitored

6

Update cadence

Daily page refresh, weekly prompt sweep

Tactics on the page

13

Tactics deprecated

6

§01 What is AEO

The 60-second answer.

Answer engine optimization is the work of getting your brand named, cited, and recommended inside the answers that AI systems generate. The unit of value is no longer a ranked page. It is a piece of groundable information with clear provenance that ChatGPT, Claude, Gemini, Perplexity, and Google AI Overviews can responsibly reuse when they construct a reply.

AEO and traditional SEO share infrastructure but optimize for different outcomes. SEO asks which pages a user should visit. AEO asks what information an AI system can responsibly use to construct an answer. A page can rank well in Google and still never be cited inside an AI answer because the answer-grade evidence is buried, the schema is wrong, or the surrounding source pool quotes a stronger third party.

The operating discipline has three jobs. Engineer your own pages for passage extraction and clean retrieval. Influence the third-party pool that AI systems already cite for your category, including Reddit, G2, analyst roundups, and the comparison sites that AI fans out to. Measure citation share and recommendation rate on a fixed prompt set across every major surface, and respond inside seven days when something drifts.

This page is the live playbook. It is updated every morning from our research library, our daily intel, and the actual tactics we are running on engagements this week. Dated entries you can copy. Deprecated tactics you should stop running. Platform shifts logged the day we see them.

§02 What changed this week

Platform shifts we logged in the last 14 days.

Strategy follows the platform. Every entry below is a real change one of the answer engines made, and the operator response we are running on engagements this week.

  • May 13, 2026 · ChatGPT

    ChatGPT shopping fanouts now inject review modifiers even when the original prompt did not request them

    Make sure G2 / Reddit / analyst-roundup coverage is fresh on every commercial cluster you optimize for

  • May 08, 2026 · Cross-platform

    Microsoft Bing formalized the retrieval shift in writing: traditional indexing asks which pages a user should visit, grounding asks what information an AI system can responsibly use to construct an answer. The unit of value moved from documents to groundable information with clear provenance.

    Stop talking to clients about ranking pages alone. Build out answer-grade evidence blocks with explicit source attribution next to every claim that you want quoted.

  • May 07, 2026 · ChatGPT

Peec AI analyzed 5 million query fanouts between April 1 and April 21. ChatGPT averages 2.1 fanouts per prompt, Perplexity 1.4, and Grok 6.8. ChatGPT silently injects modifiers like "best," "top," "comparison," "reviews," "tools," "software," and "features" into fanout queries even when the user did not type them.

    Plan for the hidden comparison and review fanouts. Get into the third-party comparison surfaces, review pages, and Reddit threads your category already lives in, not just your own site.

  • May 06, 2026 · Google AI Overviews

Google shipped five AI Search updates: subscription-labelled links, community and firsthand-source previews, more inline links placed next to relevant text, hover previews of source context, and "Explore new angles" follow-on links. Early testing showed users were significantly more likely to click subscription-labelled sources.

    Citation share is no longer the whole game. The work is now citation presentation: how your source looks inside the answer, what label it carries, and whether the preview earns the click.

  • May 06, 2026 · Perplexity

    Profound fanout study: ChatGPT keeps 91 percent of its search queries unique across runs, Copilot 47 percent, Perplexity only 14 percent. Perplexity preserves product and shopping query intent on-topic 74 percent of the time. Copilot drops to 43 percent on lifestyle intent.

    Match the prompt set to the platform. Perplexity tracking can rely on a tighter, keyword-shaped list. ChatGPT tracking needs broader recurring coverage because the fanout queries keep changing.

  • May 04, 2026 · Gemini

    Alphabet earnings remarks confirmed AI Mode and AI Overviews are increasing total search usage rather than cannibalizing it. Search and other advertising revenue grew 19 percent. Core AI response cost is down more than 30 percent since the Gemini 3 upgrade. Search Live is now live globally.

    Treat AEO as part of the main search budget, not a side experiment. Plan for AI Mode, AI Overviews, and Search Live coverage in the same operating model as classic Google.

§03 Tactics working right now

13 tactics our team is running on live engagements this month.

Dated. Sourced. Replicable. Each card lists the date we validated the tactic, the category it belongs to, and the concrete steps to ship it.

Content · May 08, 2026

Use case and workflow pages tied to a real job-to-be-done

Build buyer-stage pages that answer the question: "Can this product solve my exact scenario, for my exact team, with my exact operating constraints?"

How

Map the prompt family for each job-to-be-done. Write the page around workflow fit, named role, and operating context, not feature lists. Include a copy-grade summary block at the top, then proof and routing links to pricing, implementation, and security pages.

Source-pool · May 08, 2026

GEO content update loop across the buyer-stage cluster

Turn one approved business change into a controlled update across pricing, implementation, support, integrations, trust, and FAQ pages before AI quotes the wrong version.

How

Run a six-step loop: log the source change, map impacted prompt families, sweep the page cluster, ship an update packet with owners, retest prompts on day one and day seven, and rollback if drift hits a money prompt.

Measurement · May 08, 2026

Separate ranking-page work from answer-grounding work

AI search has split into two optimization problems. Build different systems for each one.

How

Keep your existing SEO program for page ranking. Add a parallel grounding program that owns answer blocks, provenance markers, third-party corroboration, and abstention guidance for prompts where the evidence is too weak to support a strong claim.

Content · May 07, 2026

Support and SLA pages built for evaluation prompts

Vague support copy fails the procurement prompt family. Build pages that answer what happens when something breaks, who responds, and what service commitments are real.

How

Document response times, escalation paths, channels, and incident communication on a single page. Add a copy-grade summary block, a service boundaries table, and a link out to the trust center for compliance proofs.

Source-pool · May 07, 2026

Contradiction audit before AI quotes the wrong claim

AI systems quote the clearest available claim, even when that claim conflicts with newer pricing, product, implementation, or support details elsewhere on the site.

How

Inventory every public claim on pricing, plan limits, implementation timeline, and support boundaries. Classify conflicts by type, assign a source of truth, fix in priority order, retest the prompt family the same week.
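Once the claim inventory exists, outright contradictions can be surfaced mechanically by keying every claim to a topic and flagging topics where pages disagree. A minimal sketch; the claim data and field layout below are fabricated for illustration, not a standard:

```python
from collections import defaultdict

def find_contradictions(claims):
    """Group public claims by topic and flag topics where pages disagree.

    `claims` is a list of (topic, page_url, stated_value) tuples.
    Returns {topic: [(page_url, stated_value), ...]} for conflicting topics.
    """
    values_by_topic = defaultdict(set)
    pages_by_topic = defaultdict(list)
    for topic, url, value in claims:
        values_by_topic[topic].add(value)
        pages_by_topic[topic].append((url, value))
    # A topic with more than one distinct stated value is a conflict.
    return {t: pages_by_topic[t] for t, vals in values_by_topic.items() if len(vals) > 1}

claims = [
    ("starter plan price", "/pricing", "$49/mo"),
    ("starter plan price", "/blog/2024-launch", "$39/mo"),   # stale stat
    ("support response SLA", "/support", "4 business hours"),
    ("support response SLA", "/trust", "4 business hours"),
]
conflicts = find_contradictions(claims)
print(conflicts)  # only 'starter plan price' conflicts
```

This only catches exact-value disagreements; ambiguous scope and missing context still need the manual classification pass described above.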

Distribution · May 07, 2026

Engineer for citation presentation, not just citation share

Google now adds subscription labels, community labels, author names, inline placement, favicons, and hover previews to AI citations. Click confidence is the new battleground.

How

Set canonical site name and favicon. Use bylined practitioner content on LinkedIn and Medium for author-name treatment. Tighten the meta description and the first 60 words on key pages so hover previews sell the click.

Content · May 06, 2026

Integration and compatibility pages answering system-fit prompts

Implementation guides answer rollout. Integration pages answer whether the tool connects to Salesforce, how data sync works, what is native versus API, and what limits the buyer must know.

How

Write one page per major connector. List scope, data direction, sync limits, native versus API, and unsupported scenarios. Add proof blocks: customer name, volume, frequency. Link to pricing and trust pages for follow-on prompts.

Technical · May 06, 2026

Page-collision audit when AI cites the wrong URL

The brand is still cited, but the wrong internal page wins. A pricing page outranks a comparison page. An old blog post steals citations from a money page.

How

Map prompt to cited URL across the prompt set. Compare the job each page is meant to do. Diagnose the collision type: outdated proof, broader passage, or stronger schema. Reassign content, add canonical, and rewrite the rightful winner.

Measurement · May 06, 2026

Three-layer AI search measurement: prompts, logs, conversions

Prompt dashboards alone miss what AI crawlers actually do and whether AI traffic converts. The stack has split into three layers.

How

Run a fixed prompt set across all five major surfaces every week. Ingest CDN or Cloudflare logs to see AI crawler traffic the client analytics layer misses. Wire conversion instrumentation, including OpenAI CAPI and pixel measurement, so AI sessions land in the same revenue model.
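The log layer can start as a one-file script over CDN access logs before it becomes a pipeline. A sketch that counts hits per AI crawler by matching the user-agent field; the log lines are fabricated combined-log-format samples:

```python
from collections import Counter

AI_BOTS = ("GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended")

def ai_crawler_hits(log_lines):
    """Count hits per AI crawler by matching the user-agent string."""
    hits = Counter()
    for line in log_lines:
        for bot in AI_BOTS:
            if bot in line:
                hits[bot] += 1
                break  # one crawler per request line
    return hits

# Fabricated sample lines in combined log format.
sample_logs = [
    '1.2.3.4 - - [13/May/2026:08:01:02 +0000] "GET /pricing HTTP/1.1" 200 5120 "-" "Mozilla/5.0 ... GPTBot/1.2"',
    '5.6.7.8 - - [13/May/2026:08:02:11 +0000] "GET /blog HTTP/1.1" 200 2048 "-" "Mozilla/5.0 ... PerplexityBot/1.0"',
    '9.9.9.9 - - [13/May/2026:08:03:45 +0000] "GET / HTTP/1.1" 200 1024 "-" "Mozilla/5.0 (regular browser)"',
]
print(ai_crawler_hits(sample_logs))  # GPTBot and PerplexityBot, one hit each
```

In production you would verify crawler IPs against the published ranges rather than trusting the user-agent string alone, since it is trivially spoofed.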

Content · May 05, 2026

ROI and TCO pages with formulas visible

ROI pages with hidden math fail. Finance and procurement prompts need transparent assumptions, formulas, and scenario ranges.

How

Publish the full cost stack, the assumption set, and the formula. Provide a low, mid, and high scenario. Cite the named source for each input. Route to case studies and trust content for proof of outcomes.

Technical · May 05, 2026

HTML parity audit for JavaScript-heavy sites

If the answer block, schema, internal links, or proof passages only exist after hydration, AI retrieval cannot rely on them.

How

Capture the initial server HTML and compare it to the hydrated DOM for every priority page. Check answer blocks, schema, internal links, and citations. Score severity. Anything answer-critical that is hydration-only gets fixed in the next sprint.
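The comparison step can be scripted once you have both snapshots. A sketch using plain substring markers; in practice you would capture the hydrated DOM with a headless browser such as Playwright, and the marker strings below are examples, not a standard:

```python
# Markers that must exist in the initial server HTML, not only after hydration.
ANSWER_CRITICAL_MARKERS = [
    'application/ld+json',     # schema blocks
    'class="answer-block"',    # example marker for the answer block
    'href="/pricing"',         # example internal routing link
]

def hydration_only(server_html: str, hydrated_html: str) -> list[str]:
    """Return answer-critical markers present after hydration but absent
    from the initial server HTML -- these are unreliable for AI retrieval."""
    return [
        m for m in ANSWER_CRITICAL_MARKERS
        if m in hydrated_html and m not in server_html
    ]

server = "<html><body><h1>Pricing</h1></body></html>"
hydrated = ('<html><body><h1>Pricing</h1>'
            '<div class="answer-block">...</div>'
            '<script type="application/ld+json">{}</script></body></html>')
print(hydration_only(server, hydrated))
```

Anything the function returns for a priority page is a candidate for the next sprint's fix list.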

Schema · May 04, 2026

Trust center and security pages structured for procurement prompts

Thin trust center copy fails enterprise evaluation. AI systems cannot quote what they cannot find.

How

Expose SOC 2, ISO, GDPR, and data residency claims as clean copy with named auditor and date, not gated PDFs. Add a security questionnaire summary table. Apply Article schema with the publish date. Link to subprocessor list and incident history.

Measurement · May 04, 2026

GEO change log connecting releases to prompt outcomes

Most GEO teams can see prompt movement but cannot explain it. A durable change log fixes that.

How

Log every release, content update, schema change, and outreach placement next to the prompt family it was meant to affect. Record day-one, day-seven, and day-thirty observations. Make it the artifact every weekly readout starts from.
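The log itself needs very little structure to be useful; the discipline is filling the observation windows on schedule. One possible record shape, sketched as a dataclass (the field names are ours, not a standard, and the sample values are fabricated):

```python
from dataclasses import dataclass

@dataclass
class ChangeLogEntry:
    date: str           # when the change shipped
    change: str         # what shipped: content, schema, page, outreach
    prompt_family: str  # the prompt family it was meant to affect
    owner: str          # named owner, so accountability is concrete
    day_1: str = ""     # observation windows, filled on schedule
    day_7: str = ""
    day_30: str = ""

entry = ChangeLogEntry(
    date="2026-05-04",
    change="Rebuilt pricing page answer block with visible formulas",
    prompt_family="pricing / TCO evaluation",
    owner="J. Doe",
)
entry.day_1 = "No movement yet"
entry.day_7 = "Cited on 3 of 8 pricing prompts, up from 1"
print(entry)
```

A spreadsheet with the same columns works equally well; what matters is that every weekly readout starts from it.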

§04 The 12-step playbook

If you want to do AEO yourself, this is the order.

Each step is independent enough to ship in a week. Run them in sequence on a 90-day pilot. Skip none of them.

  1. Step 01

    Run a citation baseline across all five surfaces

    Before optimizing anything, see what AI actually says about your brand. Most teams skip this and end up working from intuition. The baseline anchors every later decision.

    • Query 50 to 150 prompts in your category across ChatGPT, Claude, Gemini, Perplexity, and Google AI Overviews
    • Log citation share, recommendation rate, and the cited URL for every result
    • Save screenshots and timestamps so drift becomes obvious week over week
    • Flag every prompt where a named competitor wins and your brand is missing
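The share math behind the baseline is a few lines of arithmetic once runs are logged consistently. A minimal sketch; the field names and sample rows are illustrative, not our tooling:

```python
from collections import defaultdict

def baseline_metrics(results):
    """Compute citation share and recommendation rate per surface.

    `results` is one dict per (prompt, surface) run:
    {"surface": ..., "prompt": ..., "cited": bool, "recommended": bool, "cited_url": ...}
    """
    by_surface = defaultdict(lambda: {"runs": 0, "cited": 0, "recommended": 0})
    for r in results:
        s = by_surface[r["surface"]]
        s["runs"] += 1
        s["cited"] += r["cited"]            # bools count as 0/1
        s["recommended"] += r["recommended"]
    return {
        surface: {
            "citation_share": s["cited"] / s["runs"],
            "recommendation_rate": s["recommended"] / s["runs"],
        }
        for surface, s in by_surface.items()
    }

# Illustrative data: two prompts logged against one surface.
runs = [
    {"surface": "chatgpt", "prompt": "best crm for smb", "cited": True,
     "recommended": False, "cited_url": "https://example.com/crm"},
    {"surface": "chatgpt", "prompt": "crm comparison", "cited": False,
     "recommended": False, "cited_url": None},
]
print(baseline_metrics(runs))
```

The same structure, rerun weekly against the fixed prompt set, becomes the drift signal used in later steps.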
  2. Step 02

    Curate a fixed prompt set that decides your category

    A working set of 80 to 200 prompts that decide the sale in your category. This becomes the source of truth for every metric you report. You cannot maintain the page without it.

    • Pull buyer prompts from sales calls, support tickets, and the public Profound or PromptWatch research data
    • Cover four families: category education, vendor shortlist, feature comparison, post-purchase
    • Map each prompt to the page that should answer it
    • Lock the list. Treat additions as a change request, not an edit
  3. Step 03

    Audit retrieval before you touch content

    If the answer block, schema, or internal links only exist in the hydrated DOM, AI retrieval cannot rely on them. Fix the technical floor first or every later optimization compounds the wrong way.

    • Check robots.txt is not blocking GPTBot, ClaudeBot, PerplexityBot, or Google-Extended
    • Run an HTML parity audit on every priority page: initial HTML vs hydrated DOM
    • Verify FAQ, HowTo, Article, Product, and Organization schema render in the source
    • Test rendering on a slow connection: if the answer block needs hydration, it is unreliable
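The robots.txt check above can be automated with the standard library. A sketch, assuming the four crawler names are the ones you care about; in practice you would fetch your live robots.txt instead of the inline sample:

```python
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def blocked_crawlers(robots_txt: str, url: str) -> list[str]:
    """Return the AI crawlers that the given robots.txt blocks for `url`."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return [bot for bot in AI_CRAWLERS if not rp.can_fetch(bot, url)]

# Inline sample: GPTBot blocked site-wide, everyone else allowed.
sample = """
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
print(blocked_crawlers(sample, "https://example.com/pricing"))  # → ['GPTBot']
```

Run it against every priority URL, not just the homepage, since path-specific Disallow rules are where accidental blocks hide.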
  4. Step 04

    Engineer answer blocks at the top of every priority page

    44 percent of AI citations come from the first 30 percent of the text. Put the answer first, then the supporting evidence. This is the single biggest content lever and it is the cheapest one to ship.

    • Open every priority page with a 40 to 60 word direct answer block
    • Follow it with three to five supporting bullets that include numbers and named sources
    • Add a structured FAQ section with at least four questions and FAQ schema
    • Strip marketing copy that buries the answer further down the page
  5. Step 05

    Apply correct schema to the surfaces AI actually reads

FAQ schema lifts citation rates by a measured 350 percent in controlled studies. Article, HowTo, Product, and Organization schema set entity coherence. Generic schema without entity coherence is now a deprecated signal.

    • Pick the right schema per page type: Article for editorial, HowTo for playbooks, Product for SKUs, Service for offers, FAQPage for FAQ blocks
    • Use Organization schema with a single canonical sameAs set across LinkedIn, Crunchbase, and your site
    • Validate every page in Schema.org and Google Rich Results
    • Re-check schema after every deploy. Frameworks strip it more often than teams realize
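One way to survive the "frameworks strip it" failure mode is to generate the JSON-LD from a single source instead of hand-editing page templates. A minimal sketch; the brand name, URLs, and question text are placeholders:

```python
import json

def faq_jsonld(pairs):
    """Build FAQPage JSON-LD from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }

def organization_jsonld(name, url, same_as):
    """Build Organization JSON-LD with a single canonical sameAs set."""
    return {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
        "sameAs": same_as,
    }

faq = faq_jsonld([("What is AEO?", "Answer engine optimization is ...")])
org = organization_jsonld(
    "Example Co",  # placeholder brand
    "https://example.com",
    ["https://www.linkedin.com/company/example",
     "https://www.crunchbase.com/organization/example"],
)
# Embed each as <script type="application/ld+json">...</script> in the page head.
print(json.dumps(faq, indent=2))
```

Because the JSON-LD is built in one place, a post-deploy check can simply assert the generated blocks are present in the served HTML.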
  6. Step 06

    Build buyer-stage pages around the prompt families you mapped

    AI cites pages that answer the specific evaluation question, not generic category pages. Each prompt family needs a dedicated asset.

    • Comparison pages: feature matrix, three tables, named competitors
    • Pricing pages: formulas visible, scenario ranges, plan limits stated
    • Trust center: SOC 2 and ISO with auditor and date, not gated PDFs
    • Use case pages: one per job-to-be-done, with named role and operating context
  7. Step 07

    Get into the third-party pool AI already cites

    AI rarely cites your own site for category-defining queries. It cites Reddit, G2, analyst roundups, and the comparison sites already in the source pool. The work is making sure your brand is represented there.

    • Identify the 10 to 20 domains AI cites most for your category prompts
    • Pitch contributed pieces, expert quotes, or product listings to each one
    • Seed Reddit threads with practitioner answers, not promotional copy
    • Maintain G2 and Capterra listing freshness: response rate, feature checks, version notes
  8. Step 08

    Engineer citation presentation, not just citation share

    Google now adds subscription labels, community labels, author names, inline placement, and hover previews to AI citations. Click confidence is the new metric. Showing up is no longer the whole game.

    • Set canonical brand name, favicon, and Organization schema so the source label is clean
    • Publish bylined practitioner content on LinkedIn and Medium for author-name treatment
    • Tighten the meta description and the first 60 words on every priority page
    • Connect Google subscription linking if you have a paywall or membership
  9. Step 09

    Track prompts, logs, and conversions in one model

    Prompt dashboards alone miss what crawlers do and whether the traffic converts. The measurement stack has split into three layers. Each one needs its own instrumentation.

    • Run the fixed prompt set every week and log citation share, recommendation rate, and source URL
    • Ingest CDN or Cloudflare logs to see GPTBot, ClaudeBot, PerplexityBot, and Google-Extended traffic
    • Wire the OpenAI Conversions API plus pixel measurement for paid ChatGPT traffic
    • Land all three layers in one weekly readout so the team can see cause and effect
  10. Step 10

    Maintain a change log that connects releases to prompt outcomes

    Most teams can see movement but cannot explain it. A durable log of what changed, when, and which prompt family it was meant to affect makes root-cause analysis weeks faster.

    • Log every content update, schema change, page ship, and outreach placement
    • Record day-one, day-seven, and day-thirty observations on the affected prompt family
    • Tie each change to a named owner so accountability is concrete
    • Review the log at the start of every weekly readout
  11. Step 11

    Course-correct within seven days when something drifts

    AI source pools have 40 to 60 percent monthly churn. Drift is normal. The discipline is responding to it inside the week, before a competitor cements the new pool position.

    • Set an alert threshold on citation share by prompt family
    • When a domain enters or exits the cited pool for a priority prompt, diagnose within 24 hours
    • Ship a content refresh, schema fix, or publication outreach response inside seven days
    • Document the response in the change log so the pattern compounds
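The alert threshold reduces to a weekly diff over the prompt-family metrics. A sketch; the threshold value and the sample numbers are illustrative, not a recommendation:

```python
def drift_alerts(last_week, this_week, threshold=0.15):
    """Flag prompt families whose citation share dropped by more than
    `threshold` (absolute) week over week."""
    alerts = []
    for family, prev in last_week.items():
        cur = this_week.get(family, 0.0)  # a family with no data counts as zero
        if prev - cur > threshold:
            alerts.append((family, round(prev - cur, 2)))
    return alerts

last_week = {"vendor shortlist": 0.42, "pricing": 0.30, "category education": 0.55}
this_week = {"vendor shortlist": 0.40, "pricing": 0.10, "category education": 0.52}
print(drift_alerts(last_week, this_week))  # → [('pricing', 0.2)]
```

Wire the output into whatever channel the team already watches; an alert nobody sees inside 24 hours defeats the seven-day response window.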
  12. Step 12

    Run a quarterly contradiction and page-collision audit

    AI quotes the clearest available claim. When pricing, implementation, support, or product detail pages disagree, AI picks one and runs with it. Quarterly cleanup keeps the source pool from feeding bad answers.

    • Inventory every public claim on pricing, plan limits, support, and product detail
    • Classify conflicts: outright contradiction, stale stat, ambiguous scope, missing context
    • Assign a source of truth and propagate it across the cluster
    • Retest the affected prompt family the same week and log the result

§05 Tactics we deprecated

These used to work. They do not anymore.

Stop running these. Migrate to the live tactics in §03 or to the replacements listed below.

  • Markdown mirror pages for AI crawlers
    Apr 10, 2026
    Otterly 14-day controlled experiment: AI crawlers visited HTML pages but recorded zero visits and zero citations against the Markdown mirrors across ChatGPT, Perplexity, AI Overviews, and Claude.
    Server-side rendered HTML with clean schema
  • Hidden text to seed AI with extra context
    Apr 09, 2026
    Four of six platforms ignore hidden text entirely. Copilot flags pages with hidden text as unsafe. Gemini actively reports prompt injection attempts.
    Visible answer blocks at the top of each section
  • Self-promotional "best of" listicles
    Mar 15, 2026
    Google explicitly cracked down on this pattern in 2026 core updates, with reported 30 to 50 percent visibility drops on offending pages.
    Comparison content with named competitors and structured feature matrices
  • Generic FAQ schema without entity coherence
    Mar 01, 2026
    FAQ schema lifts citations only when the questions match real prompts and the Organization schema, sameAs set, and author entity all agree. Generic FAQ alone no longer signals quality.
    FAQ schema tied to prompt families plus Organization and Article schema with a clean entity set
  • Keyword density and exact-match anchor text
    Feb 01, 2026
    AI retrieval scores passage clarity and source corroboration, not keyword frequency. Exact-match anchors signal manipulation to most modern crawlers.
    Natural anchor variation and 40 to 60 word direct answer blocks per section
  • llms.txt as a citation lever
    Apr 05, 2026
    SE Ranking analysis of 300,000 domains found no measurable impact of llms.txt on citation frequency. Treat it as hygiene, not a lever. Only 10 percent of sites have one, and not a single top-1,000 site has implemented it.
    Server-side rendering, schema, and answer-block engineering

§06 How we maintain this page

Live, because the platforms move every week.

We track 6 answer engines on a continuous schedule. Every change we observe goes into our internal Brain, the same research library the AI Visibility Index publishes from.

Every morning, the latest validated tactics, platform deltas, and deprecation calls are reflected here. Nothing on this page is older than this week unless it is in §05 for a reason.

Platforms tracked

ChatGPT · Claude · Gemini · Perplexity · Google AI Overviews · Grok

Prompts tracked

1,500+

Across all engagements, refreshed weekly. Last sync May 13, 2026.

Cadence

Weekly prompt sweep, daily page refresh

Tactics validated in the last 30 days: 13. Deprecations issued: 6.

§07 Want us to do this for you?

If you would rather not run this yourself, that is why we exist.

We run the playbook as a managed service. Pilot pricing is tied to agreed outcomes: you pay for tools and APIs during the pilot, plus a success fee only if we hit the goal in the engagement letter.

Ready to become the answer AI gives?

Book a 30-minute discovery call. We'll show you what AI says about your brand today. No pitch. Just data.