If you are running one GEO program for ChatGPT, Claude, and Gemini, the new Muck Rack data says you are running it wrong.
On May 7, 2026, Muck Rack published the third edition of Generative Pulse: What Is AI Reading?. The study analyzed more than 25 million links cited by ChatGPT, Claude, and Gemini across 17 industries. Earned media still drives 84% of AI citations, a number that has held within a 7-point band across three editions since July 2025.
That is the story everyone is reading. The harder finding is buried below it.
The three platforms cite at completely different rates and pull completely different volumes of sources when they do cite. ChatGPT cites in 96% of responses with 5 sources per cited response. Claude cites in 55% with 13 sources. Gemini sits in the middle at 82% with 8 sources. The most-cited domain is different on every platform. These are three different retrieval philosophies, and three different optimization problems.
This post walks through the per-platform gap, explains why optimization is no longer one-size-fits-all, and gives you a five-step program to map your buyer's platform mix and adjust your earned-media plan against it.
Generative Pulse — Muck Rack, May 7, 2026
25 million links analyzed across ChatGPT, Claude, Gemini
17 industries. Third edition of the study. Earned-media share has held between 82% and 89% across all three editions since July 2025.
Citation rate (% of responses that include any citation)
Claude only cites in 55% of responses. ChatGPT cites in 96%. The two systems disagree on when citations are warranted.
Average sources per cited response
When Claude does cite, it pulls 2.6x as many sources as ChatGPT. The two systems are running different retrieval philosophies.
Top-cited domain by platform
- ChatGPT: Wikipedia
- Gemini: Reddit
- Claude: PubMed Central
Three platforms, three different ideas of what counts as the most trustworthy source on the web.
Earned media vs paid content (% of all AI citations)
Paid and advertorial content captures 0.3% of AI citations. The 280x gap to earned media has held across three editions since July 2025.
Press releases appear 3.5x more often in industry-trend queries than in best-of queries. Half of all journalism citations come from articles published in the last 12 months. Recency and trend-context drive the citation pool.
What Muck Rack actually measured
Generative Pulse is the only recurring study that tracks AI citations at this scale across three of the four major consumer AI platforms. The May 2026 edition is the third installment, following July 2025 and December 2025 reports.
The numbers are stable. Earned media has captured between 82% and 89% of AI citations across all three editions. Journalism alone has held at 25% to 27%. Paid and advertorial content has stayed at 0.3%. The 280x gap between earned and paid is the most consistent finding across the entire dataset, as Muck Rack documented in its earlier December 2025 release and discussed in its own analysis of how AI affects PR.
What is new in the May edition is the per-platform breakdown.
ChatGPT trusts encyclopedias. Claude trusts academic journals. Gemini trusts community forums. Three platforms, three different ideas of what counts as a good source.
That sentence is the working insight for every B2B brand running a GEO program in 2026. The optimization problem is not "get cited by AI." It is "get cited by the platform your buyer actually uses, in the format that platform actually retrieves from."
Five reasons the per-platform gap matters
The aggregate 84% earned-media number flattens a much more interesting story underneath. Five findings inside the May report are individually load-bearing for content strategy.
Reason #1: Citation rate alone does not predict optimization difficulty
ChatGPT cites in 96% of responses. Claude cites in 55%. The naive reading is that ChatGPT is easier to optimize for because it cites almost every time.
The harder reading is the inverse. ChatGPT cites broadly with shallow source pools, which means citation slots are abundant but each one carries less weight in the final answer. Claude cites in just over half its responses but pulls 13 sources when it does, which means the citation pool is concentrated and competitive but every cited source has substantial influence on the synthesized answer.
Citation rate tells you the supply curve. Source count tells you the weight per slot.
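A back-of-envelope way to see both curves at once, using the report's three pairs of numbers. The "weight per slot" metric is our own simplification (it treats every source in a cited response as equally influential), not something Muck Rack measures:

```python
# Toy model of citation supply vs. weight per slot, using the May 2026
# Generative Pulse numbers. Weight-per-slot = 1/source_count is an
# illustrative assumption, not a figure from the report.

PLATFORMS = {
    # platform: (citation_rate, avg_sources_per_cited_response)
    "ChatGPT": (0.96, 5),
    "Claude":  (0.55, 13),
    "Gemini":  (0.82, 8),
}

for name, (rate, sources) in PLATFORMS.items():
    slots = rate * sources   # expected citation slots per response
    weight = 1 / sources     # assumed influence of any single cited source
    print(f"{name}: {slots:.1f} expected slots/response, {weight:.2f} assumed weight/slot")
```

Run it and Claude actually leads on expected slots per response (about 7.2, against ChatGPT's 4.8), which is exactly why citation rate alone is a misleading difficulty signal.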
Reason #2: Source count signals different retrieval philosophies
ChatGPT averages 5 sources per cited response. Claude averages 13. Gemini averages 8. That is a 2.6x range across the three platforms.
A 5-source retrieval pipeline is doing top-of-funnel synthesis. The model is picking the few most authoritative or most central documents and weaving them into a tight answer. A 13-source retrieval pipeline is doing wide-aperture grounding. The model is treating the citation pool as a mosaic of evidence and reasoning across all of it.
For a brand, those are different optimization problems. To win in ChatGPT, you need to be one of 5 documents the model treats as central. To win in Claude, you need to be one of 13 documents the model treats as evidence.
Reason #3: Top-cited domains reveal what each platform trusts
ChatGPT's most-cited domain is Wikipedia. Claude's is PubMed Central. Gemini's is Reddit. These are not adjacent sources. They reflect three different retrieval priors.
Wikipedia is the encyclopedic prior. The model trusts consensus-built reference content. Get into Wikipedia, and you get cited often by ChatGPT. Independent analyses of AI citation pools have repeatedly found Wikipedia and a small set of news outlets dominating the top of the list, as Nieman Lab documented for Reuters and Axios in 2025.
PubMed Central is the academic prior. The model trusts peer-reviewed primary research. Get into PubMed-indexed journals, conference proceedings, or rigorous white papers, and you get cited often by Claude.
Reddit is the community prior. The model trusts what verified users discuss in well-moderated subreddits. Get into Reddit threads through helpful answers, technical AMAs, or community endorsement, and you get cited often by Gemini.
Reason #4: Optimization is no longer one-size-fits-all
For most of 2024 and 2025, the GEO playbook treated citation optimization as a single problem. Get cited everywhere. Build authority broadly. Earn media coverage. The rising tide lifted all platforms.
That stopped being true in late 2025 when each major platform locked in distinct retrieval defaults. The optimization questions look completely different now.
An aggregate GEO program asks:
- How do we get cited by AI?
- Which earned media outlets matter?
- What's our overall AI visibility score?
A platform-specific GEO program asks:
- Which platform does our buyer actually use?
- What is the top-cited domain on that platform?
- What source-count target should our portfolio hit?
- Are we tracking citation share separately on ChatGPT, Claude, and Gemini?
The May Muck Rack data is the cleanest evidence yet that the platforms have diverged. A B2B SaaS brand whose buyers use Claude needs a fundamentally different content portfolio from a brand whose buyers live in ChatGPT.
This is the same pattern we documented in Why Claude Cites Older Content Than ChatGPT. The platforms are tuning their retrieval stacks for different user behaviors.
Reason #5: Coverage decisions depend on platform mix
If your enterprise buyers are 70% Claude users and 20% ChatGPT users, your earned-media plan should look different from a brand whose buyers are 60% ChatGPT and 25% Gemini.
Most B2B teams have no measurement infrastructure to tell which platform their buyers actually use. They run a single GEO scorecard, average across platforms, and miss that their citation losses are concentrated on the platform that drives 70% of their pipeline.
This is the case for platform-specific share-of-voice tracking. The aggregate citation rate is too coarse to drive resource allocation.
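A minimal sketch of what that weighting looks like in practice, assuming you already track citation share per platform. Every number below is hypothetical, echoing the 70% Claude example above:

```python
# Weight per-platform citation gaps by buyer platform mix.
# All shares are hypothetical placeholders, not Muck Rack figures.

buyer_mix = {"Claude": 0.70, "ChatGPT": 0.20, "Gemini": 0.10}

# Your citation share vs. the category leader on each platform.
citation_share = {"Claude": 0.06, "ChatGPT": 0.09, "Gemini": 0.05}
leader_share   = {"Claude": 0.10, "ChatGPT": 0.14, "Gemini": 0.10}

for platform, mix in buyer_mix.items():
    gap = leader_share[platform] - citation_share[platform]
    weighted = mix * gap  # exposure: gap scaled by where buyers actually are
    print(f"{platform}: raw gap {gap:+.2f}, pipeline-weighted {weighted:+.3f}")
```

In this toy example the raw gaps are nearly identical across platforms, but weighting by buyer mix makes Claude roughly three times the exposure of ChatGPT. That reprioritization is invisible on an aggregate scorecard.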
Your buyer is not on every platform. Your GEO plan should not pretend they are.
We map your buyer's platform mix against the citation profile each platform actually rewards. ChatGPT, Claude, Gemini, and the source pools that drive each one. Built around the May 2026 Muck Rack data.
Book a Discovery Call
Why Claude only cites 55% of the time
Claude's citation rate is the outlier. ChatGPT cites in 96% of responses. Gemini in 82%. Claude in 55%.
Anthropic has been explicit that Claude is tuned for assistant-style responses where citations are included only when the model judges them necessary. The selectivity is a product decision, not a retrieval failure. When Claude does cite, it goes deep with 13 sources, and PubMed Central tops the list.
For a brand, this means Claude is the highest-stakes and highest-difficulty platform. The 55% citation rate compresses the citation supply by half compared to ChatGPT. The 13-source average expands the demand for high-quality reference material. The result is a citation pool that is small, dense, and dominated by primary research.
The optimization implication is sharp. To get cited by Claude, you need to appear in PubMed-indexed academic literature, contribute to government or NGO research repositories, or publish original research that gets picked up by those sources. Generic SEO content does not appear in Claude's citation pool with any consistency.
Claude is a primary-research search engine wearing a chatbot's clothes. Optimize for it the way you would optimize for a panel of skeptical academics, not the way you would optimize for Google.
This pattern matches what enterprise customers report in Claude Multiagent Orchestration. Multi-agent workflows magnify the selectivity. If Claude only cites in 55% of single responses, a multi-agent pipeline running 12 sub-tasks may only surface 5 or 6 cited brand mentions across the whole flow.
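A quick binomial check on that arithmetic, assuming each sub-task cites independently at the 55% rate (real orchestration pipelines may be correlated, so treat this as a rough sketch):

```python
# Expected cited steps in a 12-sub-task pipeline, if each step
# independently cites with probability 0.55. Independence is an assumption.

from math import comb

n, p = 12, 0.55
expected = n * p  # 6.6 cited steps on average
p_at_most_6 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(7))

print(f"expected cited steps: {expected:.1f}")
print(f"P(6 or fewer cited steps): {p_at_most_6:.2f}")
```

The expectation lands at 6.6, consistent with the 5-or-6 figure above once you account for steps that cite sources without mentioning your brand.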
Why ChatGPT cites Wikipedia first
Wikipedia tops ChatGPT's most-cited domain list. That is consistent across the July 2025, December 2025, and May 2026 editions of Generative Pulse. The pattern has not moved.
The reason is the encyclopedic prior. ChatGPT's retrieval stack treats Wikipedia entries as default-trusted starting points for fact-checking, definition queries, and disambiguation. When a user asks ChatGPT what something is or who someone is, the model reaches for Wikipedia first.
For B2B brands, this raises a specific question: does your category have a Wikipedia entry, and does your brand appear in it?
The fastest, most underutilized GEO move in 2026 is contributing accurate, well-sourced content to existing Wikipedia entries in your category.
A category-level Wikipedia entry that names your brand among practitioners or vendors will appear in dozens of ChatGPT responses about your category. The citation share is large, the implementation cost is low, and most B2B competitors are not investing in this surface.
This is why we documented Top Domains AI Search Cites as a separate research workstream. The top of the citation pool is not your industry blog. It is encyclopedic and journalistic content.
Why Gemini cites Reddit first
Reddit being the top-cited domain in Gemini is the most counterintuitive finding in the report. It is also the most consistent. Across all three editions of Generative Pulse, Reddit has held the top-cited slot for Gemini.
Google's Reddit data partnership, announced in February 2024, is the structural reason. Gemini's retrieval stack has weighted access to Reddit threads, AMAs, and subreddit discussions. The data is treated as community ground truth on practical questions, especially how-to and product-comparison queries.
For B2B brands, this changes the Reddit playbook. The Reddit AMA, the well-moderated subreddit answer, and the technical thread are now AI citation surfaces rather than just community channels. We covered this in Reddit AI Citations: B2B Strategy.
The implication for content portfolio is that Gemini optimization requires a community presence ChatGPT optimization does not require. A B2B brand serving Gemini-heavy buyers needs to budget for community participation as a citation channel, not a brand-marketing channel.
How to fix this in five steps
The findings are clean. The fix is operational. Five steps, all implementable inside one quarter.
Step 1: Map your buyer's platform mix
Run a customer survey or analyze referral logs to identify what percentage of your buyers use ChatGPT, Claude, Gemini, Perplexity, and Microsoft Copilot. Most B2B teams have never measured this. The answer will surprise you.
If 60% of your buyers use Claude and you have been optimizing for ChatGPT, you have been spending against the wrong citation pool. Platform mix is the input that drives every other resource allocation decision.
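A minimal sketch of the referral-log half of Step 1. The referrer domains below are illustrative and worth verifying against your own logs, since platforms change referral behavior and many AI-driven visits arrive with no referrer at all, which is why the survey should run alongside this:

```python
# Classify web-analytics referrer URLs by AI platform.
# The domain list is an illustrative starting point, not exhaustive.

from collections import Counter
from urllib.parse import urlparse

AI_REFERRERS = {
    "chatgpt.com": "ChatGPT", "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "perplexity.ai": "Perplexity", "www.perplexity.ai": "Perplexity",
    "copilot.microsoft.com": "Copilot",
}

def platform_mix(referrer_urls):
    hits = Counter()
    for url in referrer_urls:
        # tolerate scheme-less referrer strings from some log formats
        host = urlparse(url if "//" in url else "//" + url).netloc.lower()
        if host in AI_REFERRERS:
            hits[AI_REFERRERS[host]] += 1
    total = sum(hits.values()) or 1
    return {platform: round(n / total, 3) for platform, n in hits.items()}

# Example: platform_mix(logs) -> {"Claude": 0.61, "ChatGPT": 0.27, ...}
```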
Step 2: Get into the platform-specific source pools
Each platform has a top-cited domain that anchors its retrieval prior. Build a workstream against the one that matches your buyer mix.
- ChatGPT-heavy buyers: audit your category's Wikipedia presence, identify entries that should mention your brand, and contribute well-sourced edits through Wikipedia's normal contribution process.
- Claude-heavy buyers: identify the PubMed-indexed journals, government research repositories, or academic conferences that cite your category and build a research-publication track.
- Gemini-heavy buyers: build a sustained Reddit presence in 3 to 5 high-relevance subreddits with verified accounts and helpful, non-promotional contribution.
Step 3: Calibrate source count expectations to each platform
If you are selling into a Claude-heavy market, your goal is to be one of 13 evidence sources the model retrieves. If you are selling into a ChatGPT-heavy market, your goal is to be one of 5 central documents.
The 5-source goal pushes you to dominate a small number of high-authority surfaces. The 13-source goal pushes you to spread evidence across a wider footprint. Neither is harder, but they are different problems and require different content portfolio shapes.
Step 4: Run platform-specific share-of-voice tracking
Stop running a single AI visibility scorecard. Run three. Track citation share on ChatGPT, Claude, and Gemini separately. Compare share-of-voice movement on the platform that matches your buyer mix.
We covered the measurement infrastructure in How to Run an AI Visibility Audit and Share of Voice in AI Search. The May Muck Rack data is the strongest argument yet for breaking aggregate scorecards into platform-specific ones.
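One way to structure the three scorecards, assuming you already run a fixed prompt panel per platform and record whether your brand or a competitor was cited. The data shape here is hypothetical; your tooling will differ:

```python
# Per-platform share of voice over contested prompts (prompts where
# anyone in the category was cited). Sample data is hypothetical.

from collections import defaultdict

# (platform, brand_cited, any_competitor_cited) per tracked prompt run
runs = [
    ("ChatGPT", True, True), ("ChatGPT", False, True),
    ("Claude", False, True), ("Claude", True, True),
    ("Gemini", True, False),
]

def share_of_voice(runs):
    cited = defaultdict(int)
    contested = defaultdict(int)
    for platform, brand, competitor in runs:
        if brand or competitor:
            contested[platform] += 1
            if brand:
                cited[platform] += 1
    return {p: round(cited[p] / contested[p], 2) for p in contested}

print(share_of_voice(runs))
# {'ChatGPT': 0.5, 'Claude': 0.5, 'Gemini': 1.0}
```

The point of the split is the denominator: a platform where you win 1 of 2 contested prompts and a platform where you win 5 of 10 look identical on an aggregate scorecard but demand different responses.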
Step 5: Cut paid content out of your AI citation budget
Paid and advertorial content captures 0.3% of all AI citations. The pattern has held across three editions of Generative Pulse, eleven months of data, and 25 million links. There is no realistic path to AI citations through paid placement.
If your marketing budget includes a line item for sponsored content with the goal of AI citations, redirect it. The same dollars spent on PR pickup, owned research, or community presence will produce orders of magnitude more citation share. The 84% to 0.3% gap is the cleanest budget reallocation argument in B2B GEO.
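The reallocation math is a two-liner. The only assumption is the simplification that citation exposure scales with a channel's aggregate share:

```python
# Aggregate shares from three editions of Generative Pulse.
earned_share, paid_share = 0.84, 0.003

print(f"earned media returns {earned_share / paid_share:.0f}x "
      f"the citation share of paid placement")  # 280x
```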
How this fits with the other AI citation evidence
The Muck Rack May 2026 data extends a citation-pattern thesis that has been forming across multiple independent datasets through 2026.
The Otterly URL AI Citation Study showed page type matters 80% more than URL structure. The Evertune 33,000-page analysis showed the median ChatGPT-cited page is 941 words with 4 H2 sections and 15 external links. The brand-authority correlation study showed brand mentions correlate at 0.664 with AI citations, ahead of backlinks and content quality.
Muck Rack adds the missing piece: the citation pool is platform-specific, not platform-aggregate. The 84% earned-media headline holds on every platform. But the source pools, citation rates, and source counts inside that 84% diverge sharply across ChatGPT, Claude, and Gemini.
The aggregate is no longer the unit of optimization. The platform is.
FAQ
Why does Claude cite less often than ChatGPT?
Claude cites in 55% of responses while ChatGPT cites in 96%, according to Muck Rack's May 2026 Generative Pulse report. The gap reflects a product decision by Anthropic to include citations only when the model judges them necessary for the answer, rather than as a default. When Claude does cite, it pulls an average of 13 sources, the highest of any major platform.
What is the most-cited domain on ChatGPT?
Wikipedia is the most-cited domain on ChatGPT according to all three editions of Muck Rack's Generative Pulse study, including the May 2026 edition. The pattern has held since July 2025. Wikipedia entries serve as default-trusted reference content for definition, fact-check, and disambiguation queries, which gives them disproportionate citation share in the ChatGPT pool.
How many sources does Gemini cite per response?
Gemini cites an average of 8 sources per cited response, according to Muck Rack's May 2026 study of 25 million AI citations. Gemini cites in 82% of responses, with Reddit as the most-cited domain. The 8-source average sits between ChatGPT's 5 and Claude's 13.
Does paid content get cited by AI search engines?
Paid and advertorial content captures 0.3% of all AI citations across ChatGPT, Claude, and Gemini, per Muck Rack's analysis. Earned media captures 84%. The 280x gap has held across three editions of the study since July 2025. There is no measurable AI citation upside to paid placement.
Should I run a different GEO program for each AI platform?
Yes, if your buyers concentrate on one platform. The May 2026 Muck Rack data shows ChatGPT, Claude, and Gemini have different citation rates, source counts, and top-cited domains. A single aggregate GEO scorecard misses platform-specific gaps. Map your buyer's platform mix first, then build content portfolios against the source pool each platform retrieves from.
The takeaway
The 84% earned-media headline is the easy story. The harder story is that ChatGPT, Claude, and Gemini draw from different parts of the earned-media pool, in different volumes, with different citation rates.
If your buyers live in ChatGPT, your roadmap is Wikipedia, encyclopedic content, and category authority on a small number of central surfaces. If your buyers live in Claude, your roadmap is PubMed-indexed research, government or NGO citations, and a deeper evidence footprint. If your buyers live in Gemini, your roadmap is Reddit presence, community participation, and verified-account engagement in 3 to 5 high-relevance subreddits.
The aggregate optimization era is closing. The platform-specific era is here. Map the mix, build the source pool, and cut paid content out of the budget.
Your buyer's platform mix is the input that drives every other GEO decision.
We measure your buyer's actual usage across ChatGPT, Claude, Gemini, Perplexity, and Copilot, then build a content portfolio against the source pool each platform retrieves from. The aggregate scorecard is no longer the unit of work.
Book a Discovery Call
Continue the brief
AI Search Is Splitting Into Two Optimization Problems: Ranking Pages and Grounding Answers
Microsoft's May 6 grounding framework and Peec's 5 million fanout study point to the same shift: ranking a page and supplying answer-grade evidence are now different jobs.
Why Claude Cites Older Content Than ChatGPT
Only 36% of Claude's journalism citations come from the past 12 months, versus 56% for ChatGPT. That recency gap is the cleanest evergreen wedge B2B has.
ChatGPT Hit FedRAMP Moderate. Federal AI Just Got Real.
OpenAI cleared FedRAMP Moderate on April 27. Federal agencies can now buy ChatGPT Enterprise. Training data is now a procurement signal.