Strategy · 10 min read

AI Search Is Splitting Into Two Optimization Problems: Ranking Pages and Grounding Answers


Subia Peerzada

Founder, Cite Solutions · May 8, 2026

For most of the last year, the GEO market has treated one question as if it covered everything.

Can we get the page seen?

That is still a real question. It is no longer the whole job.

On May 6, 2026, Microsoft published a Bing engineering post called "Evolving role of the index: From ranking pages to supporting answers". It is one of the clearest first-party explanations we have seen for how AI-answer retrieval differs from classic search retrieval.

Microsoft's framing is blunt:

  • traditional search asks which pages should a user visit
  • grounding asks what information can an AI system responsibly use to construct an answer
  • the unit of value shifts from the document to groundable information with clear provenance
  • a valid grounding outcome can be abstain when evidence is insufficient

That is not a small wording tweak. It gives the market a clearer first-party framework for separating page visibility from answer support.

On May 5, 2026, Peec AI published "Patterns we see in ChatGPT query fanouts", based on 5 million fanouts collected between Apr 1 and Apr 21, 2026 across ChatGPT, Perplexity, and Grok. The useful finding was not just scale. It was what the models add behind the scenes. Peec found that ChatGPT frequently injects words like best, top, comparison, reviews, tools, software, and features into its hidden search paths.

Put those two releases together and a more useful market model comes into focus.

AI search is splitting into two optimization problems: ranking pages and grounding answers.

Ranking vs grounding

One shared web index now supports two different optimization jobs

Microsoft's May 6, 2026 grounding framework and Peec's May 5 query-fanout research point to the same conclusion: ranking a page and supplying answer-grade evidence are related, but they are not the same thing.

Operator takeaway: If your program only asks whether a page ranks, you are measuring the wrong win condition for a large share of AI answers.

Optimization job 1 · ranking pages

  • Primary question: Which pages should a user visit?
  • Unit of value: The page or document.
  • What tends to win: Strong page-level relevance, authority, and technical search hygiene.
  • Failure mode: The page does not rank, so the user never sees it.
  • Valid outcome: Return ranked options and let the user choose.
  • Primary owner: SEO and web teams.
  • What to build now: Pages that can rank for the target query family.

Optimization job 2 · grounding answers

  • Primary question: What information can an AI system responsibly use to construct an answer?
  • Unit of value: Groundable information: discrete facts, comparisons, and claims with clear provenance.
  • What tends to win: Answer-ready passages, clear attribution, current proof, and corroborating source types such as reviews or comparisons.
  • Failure mode: The model can see the page but still cannot safely use the claim, or it routes to a different source type for support.
  • Valid outcome: Answer when supported. Withhold or down-weight the claim when evidence is weak, stale, or conflicting.
  • Primary owner: SEO, content, PR, product marketing, and evidence-owning subject matter experts.
  • What to build now: A parallel evidence system: answer blocks, updated facts, comparison assets, review-surface coverage, and expert-backed source material.

Sources: Microsoft Bing blog, May 6, 2026; Peec AI, May 5, 2026.
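To make the comparison concrete, here is a minimal sketch of the two units of value as data. Treat it as an illustration: the field names (claim_text, attribution, corroborations, and so on) are our assumptions about what "groundable information with clear provenance" implies in practice, not a real Bing schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Unit of value in classic ranking: the page itself.
@dataclass
class RankedPage:
    url: str
    query: str
    position: int  # where the page ranks for the query

# Unit of value in grounding: a discrete claim with provenance.
# Field names are illustrative assumptions, not a real Bing schema.
@dataclass
class GroundableClaim:
    claim_text: str    # one extractable, supportable statement
    source_url: str    # where the claim lives
    attribution: str   # who stands behind it (named author, brand, study)
    published: date    # recency is part of trust
    corroborations: list[str] = field(default_factory=list)  # independent sources

# A page is one record; it can carry zero, one, or many groundable claims.
page = RankedPage(url="https://example.com/product", query="crm software", position=3)
claim = GroundableClaim(
    claim_text="Median implementation time is 14 days (2026 customer data).",
    source_url=page.url,
    attribution="Example Corp 2026 implementation report",
    published=date(2026, 3, 1),
    corroborations=["https://www.g2.com/products/example/reviews"],
)
print(claim.claim_text)
```

The shape makes the shift visible: ranking asks whether the page wins; grounding asks whether each claim on it can stand on its own provenance.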

If you want the evidence that Google rankings have already stopped acting as a full proxy for AI visibility, read "Why Google Rankings No Longer Predict AI Citations". This post goes one layer deeper. It explains why the split is happening and what brands should build now.

What changed this week

Microsoft's post names the split directly.

Classic search and AI grounding still share a lot of infrastructure. Both rely on crawling, understanding, and ranking the web. But Microsoft says they optimize for different outcomes.

That difference changes what the index is trying to deliver.

In classic search, the page is the product. The engine surfaces options, the user clicks, and the user decides what to trust.

In grounding, the page is not the product. The page is raw material for an answer. The model has to decide whether a claim is current enough, specific enough, and attributable enough to support a committed response.

That is why Microsoft's table matters so much. It formalizes five operator-level differences:

  • the core question is different
  • the unit of value is different
  • the error model is different
  • the acceptable outcome is different
  • the accountability standard is different

In search, a mediocre result can still be acceptable because the user can self-correct by clicking something else.

In grounding, a weak fact can compound inside a synthesized answer. That makes evidence quality a production issue, not a traffic issue.

Why Peec's fanout data makes the split more urgent

If Microsoft's post explains the theory, Peec explains the pressure that theory creates in practice.

Peec's May 5 study found:

  • ChatGPT averages 2.1 fanouts per prompt
  • Perplexity averages 1.4
  • Grok averages 6.8
  • ChatGPT often injects best, top, comparison, reviews, tools, software, and features into hidden query expansions

That matters because the model is often not checking one page against one prompt. It is checking a web of supporting subquestions.

A product page might rank for the broad category term. That does not mean it wins the hidden retrieval paths for (a toy sketch follows this list):

  • feature comparison
  • review corroboration
  • best-of framing
  • pricing context
  • category alternatives
  • current-year evaluation
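Peec has not published the expansion mechanics, so the sketch below rests on a simplifying assumption: the injected modifiers behave like template rewrites of the original prompt. It exists only to show why a single ranked page rarely wins every hidden path.

```python
# Toy fanout expansion. The modifier words come from Peec's observations;
# the template logic is our assumption, not how ChatGPT actually rewrites queries.
EXPANSION_TEMPLATES = [
    "best {p}",
    "top {p}",
    "{p} comparison",
    "{p} reviews",
    "{p} features",
    "{p} tools",
    "{p} software",
]

def fan_out(prompt: str, max_fanouts: int = 3) -> list[str]:
    """Expand one user prompt into hypothetical hidden subqueries."""
    return [t.format(p=prompt) for t in EXPANSION_TEMPLATES[:max_fanouts]]

# One typed prompt, several retrieval paths.
for subquery in fan_out("crm for small agencies"):
    print(subquery)
# best crm for small agencies
# top crm for small agencies
# crm for small agencies comparison
```

Each subquery can pull in a different source type, and your category page is unlikely to be the best evidence for all of them.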

This is exactly where a lot of GEO programs break.

They keep measuring visibility as if one well-ranked page should carry the whole burden of proof.

It usually cannot.

If the model silently fans out into comparisons and reviews, then answer support starts depending on assets that are not captured by a rank tracker alone. Comparison pages. G2 profiles. Reddit threads. Analyst writeups. Expert posts. Updated documentation. Named research. Current proof blocks.

That is why the new problem is not simply “get cited.”

The new problem is “supply answer-grade evidence across the source types the model actually uses.”

Ranking pages and grounding answers reward different strengths

This is the part that most teams need to make operational.

Ranking pages still rewards page-level strength

Classic search still cares a lot about:

  • crawlability and indexation
  • topical relevance
  • internal links
  • page quality
  • authority signals
  • search intent fit

None of that goes away. If your pages are hard to crawl, poorly structured, or thin, you create a problem before grounding ever starts.

Grounding answers rewards evidence-level strength

Grounding asks a tougher set of questions:

  • can the model extract one supportable claim cleanly?
  • is the claim recent enough to trust?
  • can the model attribute that claim clearly?
  • does another source corroborate it?
  • is this the right source type for the subquestion?
  • if the evidence conflicts, should the system hold back?

That is a different job.

It is why a decent-ranking page can still lose AI visibility, and why a non-ranking asset can still shape the final answer.
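Microsoft has not published grounding code, but the checklist above is easier to internalize as explicit gates, with abstention as a valid outcome. A minimal sketch follows; the fields, the rough 18-month freshness window, and the corroboration rule are illustrative assumptions, not Bing's actual logic.

```python
from datetime import date

def grounding_decision(claim: dict, today: date) -> str:
    """Toy gate over the grounding questions above. All thresholds are assumptions."""
    if not claim.get("extractable_statement"):
        return "abstain: no clean, discrete claim to extract"
    if (today - claim["published"]).days > 548:  # ~18 months, an arbitrary window
        return "abstain: evidence may be stale"
    if not claim.get("attribution"):
        return "abstain: no clear provenance"
    if claim.get("conflicts_with_other_sources"):
        return "abstain: conflicting evidence"
    if not claim.get("corroborations"):
        return "down-weight: first-party only, no corroboration"
    return "use: claim can support a committed answer"

claim = {
    "extractable_statement": "Median implementation time is 14 days.",
    "published": date(2026, 3, 1),
    "attribution": "Example Corp 2026 implementation report",
    "conflicts_with_other_sources": False,
    "corroborations": ["independent review thread"],
}
print(grounding_decision(claim, date(2026, 5, 8)))  # use: claim can support a committed answer
```

Notice that most of the exit paths are abstentions. That is the asymmetry operators need to plan for.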

We have already seen adjacent parts of this pattern across the site.

This week's shift sits underneath all of them. It says the market's object of optimization is changing.

The practical consequence: owned pages are no longer enough on their own

A lot of teams still behave as if the homepage, product page, and a few blog posts should do most of the GEO work.

That is too narrow for the fanout patterns we are seeing.

If ChatGPT is silently searching for comparisons, reviews, features, and top lists, your grounding system needs coverage for those evidence paths too.

That does not mean publishing manipulative listicles or flooding the web with low-grade AI content. It means building a cleaner evidence stack.

For most B2B brands, that stack now includes:

  • owned pages with direct answers and named proof
  • comparison pages for buyer-side alternatives
  • review-surface coverage where customers and operators validate the product in public
  • expert commentary that gives models a credible human source to cite
  • current facts that are easy to verify and easy to attribute
  • third-party references that support claims you should not be the only one making about yourself

The winner is not the brand with the loudest page. The winner is the brand whose claims survive grounding.

What brands should do now

This is the part where the topic becomes useful, not just interesting.

1. Separate page visibility from answer support in your operating model

Do not track “AI visibility” as one blended idea.

Review your priority prompts and ask two different questions:

  • do we have pages that can rank or get retrieved for this topic?
  • do we have supportable, attributable evidence for the subclaims the model is likely to assemble?

Those are related, but they are not the same task.
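One way to stop them from blurring back together is to score them as separate fields on every priority prompt. The scorecard shape below is our suggestion, not an industry standard; in practice you would wire the two booleans to your own rank-tracking and citation data.

```python
from dataclasses import dataclass

@dataclass
class PromptScorecard:
    """Two separate win conditions per priority prompt. Field names are our own."""
    prompt: str
    page_retrievable: bool    # job 1: can a page rank or get retrieved?
    evidence_supported: bool  # job 2: can the likely subclaims be grounded?

    def status(self) -> str:
        if self.page_retrievable and self.evidence_supported:
            return "visible and usable in answers"
        if self.page_retrievable:
            return "seen but not safely usable: fix the evidence"
        if self.evidence_supported:
            return "usable but not reachable: fix retrieval"
        return "neither: start with pages, then proof"

card = PromptScorecard("best crm for small agencies",
                       page_retrievable=True, evidence_supported=False)
print(card.status())  # seen but not safely usable: fix the evidence
```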

2. Audit your claims, not just your URLs

Take the ten to twenty claims that matter most to revenue and evaluate each one like a model would.

Examples:

  • fastest implementation
  • strongest integrations
  • lowest total cost
  • best enterprise support
  • most secure option
  • easiest setup for a specific team

Then ask:

  • where is the owned proof?
  • where is the corroborating proof?
  • where is the current date or version context?
  • where would a review or comparison fanout send the model?

The exercise usually exposes a gap. Teams often have pages. They do not always have evidence.
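A lightweight way to run the audit is to model claims rather than URLs, with the four questions above as fields. Everything in this sketch is hypothetical example data.

```python
# Claim audit sketch: one record per commercial claim, not per page.
claims = [
    {
        "claim": "fastest implementation in the category",
        "owned_proof": "https://example.com/implementation-report-2026",
        "corroborating_proof": [],  # the common gap: only we say this
        "date_context": "2026-03",
        "fanout_surfaces": ["G2 reviews", "comparison pages"],
    },
]

def audit(claim: dict) -> list[str]:
    """Return the evidence gaps a grounding system would likely hit."""
    gaps = []
    if not claim["owned_proof"]:
        gaps.append("no owned proof")
    if not claim["corroborating_proof"]:
        gaps.append("no corroborating proof")
    if not claim["date_context"]:
        gaps.append("no current date or version context")
    if not claim["fanout_surfaces"]:
        gaps.append("no coverage where review or comparison fanouts land")
    return gaps or ["no obvious gaps"]

for c in claims:
    print(c["claim"], "->", audit(c))
# fastest implementation in the category -> ['no corroborating proof']
```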

3. Build source-type coverage for hidden fanouts

Peec's finding about injected words should change content planning.

If the model often adds comparison, reviews, and features, then those are not edge cases. They are part of the retrieval path.

That means your roadmap should include assets built for those evidence needs, not only broad category pages. A simple coverage check, sketched after the list below, makes the gaps visible.

For B2B software, that often means stronger:

  • comparison pages
  • implementation pages
  • support and SLA pages
  • integration pages
  • review-generation and review-response workflows
  • expert-authored posts tied to real operators
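Here is the coverage check mentioned above: cross the modifiers Peec observed against the asset types you actually have. The modifier-to-asset mapping is our illustrative assumption about which asset tends to answer which hidden path, not anything Peec published.

```python
# Illustrative mapping from Peec's injected modifiers to asset types.
MODIFIER_TO_ASSET = {
    "comparison": "comparison pages",
    "reviews": "review-surface coverage",
    "features": "integration and feature pages",
    "best": "third-party best-of and analyst mentions",
    "top": "third-party best-of and analyst mentions",
}

def coverage_gaps(assets_you_have: set[str]) -> dict[str, str]:
    """Return the hidden fanout paths with no asset behind them."""
    return {modifier: asset
            for modifier, asset in MODIFIER_TO_ASSET.items()
            if asset not in assets_you_have}

have = {"comparison pages", "integration and feature pages"}
print(coverage_gaps(have))
# {'reviews': 'review-surface coverage',
#  'best': 'third-party best-of and analyst mentions',
#  'top': 'third-party best-of and analyst mentions'}
```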

4. Give grounding more owners than SEO alone

SEO can own part of this. SEO cannot own all of it.

Grounding quality is influenced by:

  • content strategy
  • product marketing
  • customer proof
  • PR and analyst relations
  • support documentation
  • subject matter experts
  • web teams maintaining page structure and freshness

If the claim needs clear provenance, the people who produce the proof matter as much as the people who publish the page.

5. Treat abstention risk as a real business problem

Microsoft's “answer when supported; abstain when evidence is insufficient” line is easy to skim past. It should not be skimmed past.

That is the market telling you that weak evidence does not only hurt ranking. It can remove you from the answer entirely.

A brand can disappear from a high-intent answer surface not because the page was unreachable, but because the system could not justify the claim strongly enough.

That is a different failure mode, and it needs a different fix.

Are you optimizing for page visibility, answer support, or both?

Cite Solutions audits the pages, proof assets, third-party sources, and fanout gaps that determine whether AI systems can responsibly use your brand in an answer. If your team is still treating GEO like rank tracking with a new label, we can show where that breaks.

Get Your AI Visibility Audit

What this means for the GEO market

This is a consequential platform post because it makes a blurry market distinction explicit.

Microsoft did not solve grounding. It did publish a cleaner vocabulary for discussing it.

A lot of current GEO confusion comes from mixing two ideas into one:

  • visibility of pages
  • usability of evidence inside an answer

Once you separate them, a lot of current market behavior makes more sense.

Why rankings keep missing citation outcomes. Why comparison and review surfaces keep showing up. Why some brands with strong SEO still lose in AI answers. Why a stale stat can be more damaging than a weak ranking. Why first-party claims need corroboration. Why prompts fan out into source types your own site does not control.

The market is moving from a page-discovery problem toward an evidence-supply problem.

That does not replace SEO.

It changes what SEO has to work with.

FAQ

Does this mean SEO matters less now?

No. SEO still determines whether your content is reachable, understandable, and eligible to compete. What changed is that eligibility does not guarantee usability inside an AI answer. Grounding adds another standard on top of retrieval.

What is “groundable information” in practical terms?

Microsoft uses the term to describe discrete, supportable information with clear provenance. In practice, that usually means claims with named attribution, current facts, comparison clarity, and enough context that a model can use the statement responsibly.

Why are review and comparison surfaces so important now?

Because Peec's May 5 fanout research shows ChatGPT often injects words like comparison and reviews into hidden retrieval paths. That means those source types influence answer formation even when the user did not type those words explicitly.

Can a brand win grounding without ranking first?

Sometimes, yes. A non-ranking or lower-ranking asset can still influence the final answer if it supplies a strong piece of evidence that surfaces through a fanout path, a cited source pool, or a corroborating source type. That is one reason AI visibility and SERP rank diverge so often.

What should a B2B team build first?

Start with a claim audit. Identify the commercial claims that matter most, map the owned and third-party evidence supporting each one, then prioritize the missing comparison, review, and proof assets. After that, tighten the pages so those claims are easy to extract and attribute.

The bottom line

Microsoft's May 6 post and Peec's May 5 fanout study point to the same conclusion from two different directions.

Classic search still asks which pages deserve visibility.

AI grounding asks which pieces of evidence deserve to support an answer.

The teams that separate those two jobs will build better systems, publish better assets, and diagnose AI visibility problems faster.

The teams that keep treating both jobs as one will keep asking why a page that ranks well still fails to show up when the answer actually matters.

Ready to become the answer AI gives?

Book a 30-minute discovery call. We'll show you what AI says about your brand today. No pitch. Just data.