The first wave of GEO work was about one question: can we get cited at all?
That is still the first gate. It is no longer the whole game.
On May 6, 2026, Google published "5 new ways to explore the web with generative AI in Search", a product update for AI Mode and AI Overviews that looks small if you read it as UI polish. It is not small.
Google added five things that materially change how sources compete inside the answer:
- follow-on article suggestions under AI responses
- subscription-labeled links for users' connected news subscriptions
- firsthand perspectives with creator, handle, or community context
- more inline links placed next to the exact sentence or bullet they support
- hover previews that show page and site context before the click
Google also said users were significantly more likely to click links labeled as part of their subscriptions in early testing.
That last detail matters more than it might seem. It means the battle is no longer only about source selection. It is also about source presentation.
Citation presentation shift
Google added a second competition layer after source selection
The old question was whether your brand earned a citation. The new question is whether your citation looks credible, relevant, and worth clicking inside the answer itself.
If you need the source-eligibility layer first, start with our guide to Google AI Mode optimization. This post is about what happens after you make the source list.
What changed on May 6
Google's official post was explicit. The company is "continuing to improve how we show links" in AI Search and is "developing new ways to help you find the sources, brands and websites you value."
That wording matters because it separates two jobs that were previously blurred together:
- choosing a source
- helping the user trust and click that source
The five updates map cleanly to that second job.
Google is extending the answer into a source-selection interface
The first update, Explore new angles, adds additional article pathways after the answer. That means a brand can win the first citation and still lose the session if a stronger second-click asset appears immediately below.
The second update, subscription labels, adds a trust cue before the click. For publishers, membership businesses, and paid research brands, that is not cosmetic. Google directly said the label improved click propensity in early testing.
The third update, firsthand perspectives, adds identity context. A creator name, handle, or community label gives the user one more reason to trust a source before leaving the answer.
The fourth update, more inline links, changes the importance of section-level page design. If the proof lives right next to the sentence, the sentence has a better chance to earn the link.
The fifth update, hover previews, means page title quality and source framing now affect performance inside the answer layer itself.
That is why I think this update deserves a stronger read than "Google shipped some better links."
Google just turned citation presentation into a real operating layer.
Why this is bigger than a UI tweak
A lot of GEO commentary still treats citation visibility like a binary scoreboard. You are either cited or you are not.
That was already too simple. After May 6, it is clearly wrong.
The new competition looks more like this:
| Layer | Old question | New question |
|---|---|---|
| Source selection | Did Google include us? | Did Google include us for the right part of the answer? |
| Source presentation | Mostly ignored | How does the source look before the click? |
| Session continuation | Mostly ignored | Does our asset win the next exploration path too? |
This matters because AI search already compresses traffic. In our analysis of AI referral traffic as a decision-stage channel, the main lesson was that fewer visits can still be higher-value visits. Google's new link treatments raise the bar again. If the visit pool is smaller and more curated, the presentation of each link matters more, not less.
The market implication: identity and click confidence are now GEO variables
For most brands, the first practical implication is simple.
A generic source label is weaker than a source with visible identity.
When Google says it will add creator names, handles, or community names to firsthand perspectives, it is telling brands that who is speaking is becoming more legible at the moment of choice.
That changes the relative value of assets like:
- expert LinkedIn posts
- named practitioner essays
- community discussions with real participants
- founder commentary tied to firsthand experience
- subscription content that already has a user relationship behind it
We already covered one part of this in our LinkedIn analysis, which showed why LinkedIn has become a major AI citation source for B2B brands. Google's May 6 update extends the logic. It is not only that these sources can be cited. It is that Google now has more ways to show why a user should care about them.
This is a meaningful shift in the economics of authority.
A plain brand page still matters. But a plain brand page sitting next to a named operator, a recognized community, or a subscription-tagged source may no longer be the most attractive click.
Passage design now affects click opportunity, not just citation opportunity
The other major implication sits on the page itself.
For the last few months, the strongest structural advice in GEO has been some version of the same principle: write pages in answer-sized units, keep proof close to the claim, and make sections extractable. We made that case in Passages Beat Pages.
Google's new inline-link behavior makes that advice more urgent.
If Google places links beside the exact sentence or bullet it wants to support, then the section has to do more than answer the question. It has to produce a clean clickable moment.
That means weak sections become twice as expensive:
- they are less likely to be cited
- even if the page is cited, the strongest link opportunity may attach to a better-structured competitor section instead of yours
The design rule is straightforward.
Every major section on an important page should now answer four tests:
- does it answer one clear sub-question?
- does it place the proof close to the claim?
- does it carry enough context to stand alone if extracted?
- would a user understand why to click this source from one sentence alone?
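The four tests above can be run as a simple editorial checklist. Here is a minimal sketch in Python; the `Section` fields and the crude pass/fail heuristics are my own assumptions, meant as placeholders for human editorial review, not anything Google publishes or scores:

```python
# Illustrative editorial checklist, not a Google scoring model.
# Field names and thresholds are assumptions for demonstration only.
from dataclasses import dataclass


@dataclass
class Section:
    sub_question: str        # the one question this section answers
    claim: str               # the main claim the section makes
    proof: str               # evidence placed adjacent to the claim
    standalone_summary: str  # one sentence that works out of context


def passes_citation_tests(section: Section) -> dict:
    """Apply the four section-level tests as rough yes/no checks."""
    return {
        # Test 1: the section is scoped to a single sub-question
        "answers_one_sub_question": bool(section.sub_question.strip()),
        # Test 2: proof exists next to the claim, not elsewhere on the page
        "proof_near_claim": bool(section.proof.strip()),
        # Test 3: the summary is substantial enough to stand alone (crude proxy)
        "stands_alone": len(section.standalone_summary.split()) >= 8,
        # Test 4: the summary is a complete, click-justifying sentence (crude proxy)
        "click_reason_in_one_sentence": section.standalone_summary.strip().endswith("."),
    }
```

In practice these heuristics only flag candidates; an editor still judges whether the section would earn the inline link.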
That last question is the new one.
Publishers and membership brands just got a more specific AI playbook
The subscription-linking update is the least discussed part of the May 6 release. It may end up being one of the most important.
Google said users were significantly more likely to click subscription-labeled links in early testing. That gives publishers something they have been missing in most AI visibility debates: a concrete product-level clue about how to recover more value from AI answer layers.
This matters against a rough backdrop. Our May 7 intel note cites Nieman Lab's May 6 reporting on publisher pressure, including Chartbeat data from March 2026 showing steep referral declines over the last two years for smaller and mid-sized publishers. If AI surfaces compress traffic and Google simultaneously introduces presentation features that change which remaining links get clicked, then the operational question becomes obvious.
Which publishers are set up to win the click when the answer does include them?
For media, subscription businesses, and research firms, that leads to a sharper checklist:
- implement Google's subscription linking if the model fits your business
- make bylines, section titles, and article framing stronger than generic newsroom packaging
- publish first-person or expert-led explainers where the identity adds value before the click
- design article titles for hover-preview clarity, not only search-result curiosity
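Subscription-aware link treatments depend on Google knowing which content sits behind a paywall. Subscription linking itself is configured at the account level, but one documented building block on the page side is schema.org paywall markup. A sketch, with placeholder publisher and selector names:

```json
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example subscriber-only analysis",
  "isAccessibleForFree": "False",
  "publisher": {
    "@type": "Organization",
    "name": "Example Publisher"
  },
  "hasPart": {
    "@type": "WebPageElement",
    "isAccessibleForFree": "False",
    "cssSelector": ".paywalled-section"
  }
}
```

The `hasPart` block tells crawlers which page element is paywalled rather than cloaked, which is the prerequisite for paywalled content being treated as a legitimate, labelable source.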
That is a tighter and more useful playbook than simply saying "publishers need more citations."
GEO measurement now needs a presentation layer
I do not think this replaces the measurement model we outlined in "AI search measurement is splitting into three layers." It adds a more specific requirement inside the prompt and answer layer.
From here on out, serious teams should separate at least three things inside AI answer reporting:
- citation appearance: did we show up?
- citation placement: where inside the answer did we show up?
- citation presentation: what context, identity, preview, or label did the user see before clicking?
Most teams do not capture that third layer yet.
They should.
Otherwise the report will say the brand appeared, while the real outcome was weaker:
- the citation had no compelling source identity
- the better click path went to a subscription-labeled competitor
- the inline link sat next to the competitor's proof point, not yours
- the hover preview made your page feel vague and the competitor's page feel exact
That is no longer a theoretical problem. Google just productized it.
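Teams that want to capture the third layer need a record shape for it. A minimal sketch of what a per-prompt citation log could look like; the field names, the engine label, and the gap heuristics are assumptions for illustration, not a standard:

```python
# Illustrative logging schema for AI citation tracking.
# All field names and checks are assumptions, not an industry standard.
from dataclasses import dataclass
from typing import Optional


@dataclass
class CitationObservation:
    prompt: str
    engine: str                            # e.g. "google-ai-mode" (assumed label)
    # Layer 1: citation appearance
    cited: bool
    # Layer 2: citation placement
    answer_section: Optional[str] = None   # which part of the answer linked us
    inline: bool = False                   # was the link attached to a specific sentence?
    # Layer 3: citation presentation
    source_identity: Optional[str] = None  # creator name, handle, or community label shown
    subscription_label: bool = False       # did a subscription badge appear?
    hover_preview_title: Optional[str] = None


def presentation_gaps(obs: CitationObservation) -> list[str]:
    """Flag presentation weaknesses for a single observation."""
    if not obs.cited:
        return ["not cited"]
    gaps = []
    if obs.source_identity is None:
        gaps.append("no visible source identity")
    if not obs.inline:
        gaps.append("citation not attached to a specific claim")
    if obs.hover_preview_title is None:
        gaps.append("no hover-preview title recorded")
    return gaps
```

Even a log this simple makes the failure modes above queryable instead of anecdotal: you can count how often the brand appeared without an identity, a label, or an inline attachment.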
What brands should do now
You do not need to panic and rebuild the whole content program this week. You do need to adjust what you consider a GEO win.
1. Audit your top cited assets for presentation strength
Start with the pages and off-site assets already appearing in AI answers. Review whether they have:
- visible expert identity
- strong titles that make sense in a hover preview
- section-level proof near important claims
- clear article or page framing that earns the click quickly
2. Build more named-expert and firsthand assets
If Google is explicitly elevating creator, handle, and community context, then anonymous corporate publishing loses relative strength. Give more of your important content a speaker the user can evaluate.
3. Tighten sections so they can win the inline-link moment
The goal is no longer only to have a strong page. The goal is to have strong citation-ready sections that can carry a link next to the exact point being made.
4. Treat title quality as GEO infrastructure
Hover previews make title quality matter again in a different way. The title should tell the user exactly why the page is the right next click, not just try to maximize broad SERP appeal.
5. Add citation presentation to reporting
For important prompt sets, record not just whether the brand appeared, but how it appeared. This is now part of performance analysis.
Are your citations built to win the click, not just the mention?
Cite Solutions audits citation eligibility, source presentation, and click-confidence gaps across Google AI Mode, AI Overviews, ChatGPT, Gemini, Claude, and Perplexity. If your brand is visible but under-clicked, we can usually show why.
Book a GEO Audit
What this changes for the GEO market
The cleanest way to say it is this:
GEO is moving from source acquisition toward source merchandising.
That does not mean SEO-style click-through tinkering becomes the new whole game. It means answer-engine optimization is maturing into a fuller discipline. The work now spans:
- getting the source selected
- designing the source to look trustworthy in context
- shaping the next click after the answer
That is a broader remit than the early-market idea of GEO as prompt monitoring plus answer blocks.
It also strengthens the case for why brands need cross-functional ownership. Content, PR, social, subscription product, analytics, and technical SEO all touch this new presentation layer.
The brands that treat this as a real workflow will outperform the brands that keep using a simple cited-or-not-cited scoreboard.
FAQ
What is citation presentation in AI search?
Citation presentation is the way a cited source appears inside an AI answer before the user clicks. It includes things like the source label, creator or community context, inline placement, subscription badges, and hover previews. Google's May 6, 2026 AI Search update made this layer much more visible in AI Mode and AI Overviews.
Why does Google's May 6 update matter for GEO?
Because it changes the unit of competition. Before this update, most GEO teams focused on whether a brand earned a citation at all. After this update, the brand also has to win the way that citation looks inside the answer. Google added more inline links, identity context, subscription labels, and hover previews, all of which shape click behavior before the visit starts.
Does this mainly affect publishers, or all brands?
Publishers have the clearest immediate use case because Google explicitly said subscription-labeled links improved click propensity in early testing. But the broader shift affects all brands. B2B companies, software vendors, marketplaces, and service brands all depend on source trust, clear page framing, and stronger expert identity when AI answers decide which link feels worth clicking.
What should B2B brands change first?
Start by reviewing the assets that already get cited. Tighten page titles, add named expert context where it helps, move proof closer to major claims, and create more section-level answers that can carry an inline link cleanly. Then update reporting so citation presentation is measured alongside citation appearance.
The bottom line
Google's May 6 release did not just improve AI Search navigation. It changed what a citation win means.
From here forward, the best GEO programs will treat visibility as a two-step competition: earn the source, then earn the click.
That is a harder job than raw citation chasing. It is also a better one, because it aligns the work with the outcome brands actually care about. Not just being mentioned. Being chosen.