Most teams treat ChatGPT, Claude, Perplexity, and Gemini as one channel called "AI search." That framing hides the most useful planning input available right now: the 5W AI Platform Citation Source Index 2026, published this month.
Claude is the outlier on recency.
According to 5W Public Relations' AI Platform Citation Source Index 2026, only 36% of Claude's journalism citations come from the past 12 months. ChatGPT pulls 56% of its journalism citations from the same recent window. The Index synthesizes 680 million individual citations across six prior studies between August 2024 and April 2026, so this is not a small-sample artifact. Claude is structurally biased toward older, analytical content.
For a B2B SaaS brand with thin recent press but a deep library of evergreen documentation, case studies, and explainers, Claude is the most accessible GEO surface available right now.
This post walks through the diagnosis, the implications, and a step-by-step plan for capturing Claude's evergreen citation pool before everyone else notices it.
Want a Claude visibility audit that maps your evergreen library to the prompts buyers actually run?
We benchmark how often Claude already cites your documentation, case studies, and pillar pages, then close the gaps your competitors haven't touched yet.
Book a Claude GEO Audit

What the recency gap actually looks like
The 5W Index is the cleanest snapshot we have of how each major LLM weights the freshness of its sources.
The headline numbers, drawn from the Everything-PR research write-up of the Index:
| Platform | % of journalism citations from past 12 months | Editorial fingerprint |
|---|---|---|
| ChatGPT | 56% | Wikipedia + Reddit + Forbes + Business Insider + Reuters + TechRadar |
| Claude | 36% | NYT + The Atlantic + The New Yorker + The Economist |
| Perplexity | Tilts recent on time-sensitive queries | Reddit + LinkedIn + NIH/PubMed + Microsoft |
| Gemini + Google AI Overviews | Skews recent, especially on news queries | Reddit + YouTube + Reuters + Forbes + FT + Google-owned properties |
Two things stand out.
Claude prefers analytical journalism over fast-cycle news outlets. The publications it leans on (NYT, The Atlantic, The New Yorker, The Economist) are the ones that publish long, structured arguments and keep them online for years.
Claude does not penalize content for being older the way ChatGPT and Gemini do. The 36% versus 56% gap is large. It says Claude is comfortable citing a 3-year-old explainer or case study if the structure and reasoning are strong.
Claude is not a news engine. It is a reasoning engine with an old-content tolerance.
Why the wedge exists in the model itself
The recency gap is not random. Three structural factors line up to produce it.
Reason #1: Anthropic optimizes Claude for analytical reasoning, not recency
Claude's design priorities are consistent with what we see in the citation data. Anthropic positions Claude for long-context reasoning, structured argument, and document analysis. Those are the use cases Claude is benchmarked on internally and externally.
A recency-biased citation engine fights that positioning. An evergreen-tolerant one supports it.
Reason #2: Claude's training corpus weights long-form sources heavily
Public research on LLM citation behavior, including the Profound longitudinal study referenced inside the 5W Index and the Evertune ChatGPT structure analysis, shows that models cite sources that look like the kinds of documents they were trained on most.
Claude's preference for The Atlantic, The New Yorker, and The Economist mirrors this. Those publications produce the long, structured pieces that look most like the high-quality long-context examples Anthropic trains on.
Older B2B explainers that are well-structured share that shape. They survive into Claude's citation pool for the same reason.
Reason #3: Claude's web search runs on a smaller, more curated retrieval index
We covered this in Claude web search is no longer a side surface. Claude's retrieval pool is smaller and more selective than ChatGPT's. A smaller pool tolerates older, well-structured documents because the alternative is a thinner shortlist.
ChatGPT, with a larger retrieval surface, can afford to weight recency more aggressively. Claude cannot.
What this means for B2B brands with thin recent press
Most B2B SaaS marketing teams have the same problem. The press team produces 4-6 placements a year. The blog publishes once or twice a month. The case study library has 8 strong pieces, half of which are 18-36 months old. The product documentation runs to several hundred URLs and gets surface updates but not full rewrites.
In a recency-weighted model, that asset profile underperforms. ChatGPT will skip the 2-year-old case study in favor of a 3-month-old TechCrunch piece a competitor placed last quarter.
In Claude, the same case study can carry full weight. Claude does not assume older equals worse.
The wedge for B2B in 2026:
- Your case study library is a Claude asset. Even the older pieces.
- Your pillar explainers are a Claude asset. Even the ones from 2023 and 2024 that are still structurally sound.
- Your product documentation is a Claude asset. Older docs that explain "what is X" and "how does Y work" are still being cited if the structure is clean.
- Your press placements from 18-36 months ago are still active inside Claude. Old wins are not dead wins.
ChatGPT will not reward those assets the same way. Claude will.
The contrast with ChatGPT, in plain terms
The two platforms ask different questions of a candidate citation source.
ChatGPT asks:
- Was this published recently?
- Does this match a current news event?
- Is this on a high-frequency citation domain like Reddit, Wikipedia, or Forbes?
- Does this answer in a short, declarative passage?
Claude asks:
- Is the argument structured and complete?
- Does the source publication have analytical authority?
- Is the reasoning still valid, regardless of publication date?
- Can a clean, full passage be extracted?
The same brand asset can fail one filter and pass the other. A 2024 case study with strong structure but no recent press cycle has a near-zero ChatGPT citation probability and a meaningful Claude citation probability.
How to capture Claude's evergreen citation pool
If you accept the recency gap as real, the implementation playbook is concrete. Five steps, in order.
Step 1: Inventory your evergreen library
Pull every blog post, case study, explainer, white paper, and major documentation page published in the last 36 months. Tag each one for structural quality, not date.
What you are looking for:
- A clear question or thesis in the H1
- Subheadings that read as complete claims, not labels
- A direct answer near the top
- Named proof points, numbers, and quotes
- An identifiable author and updated last-reviewed date
- No broken links or dead references
A 2-year-old asset that passes those filters is a Claude candidate. A 3-month-old asset that fails them is not.
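The inventory pass above is easy to operationalize as a simple pass/fail checklist. The sketch below is illustrative only: the `Asset` class, its field names, and the example URLs are assumptions, not part of any standard tool.

```python
from dataclasses import dataclass

# Hypothetical checklist: one boolean per structural filter from Step 1.
@dataclass
class Asset:
    url: str
    h1_has_thesis: bool
    subheads_are_claims: bool
    direct_answer_on_top: bool
    named_proof_points: bool
    author_and_review_date: bool
    no_broken_links: bool

    def passes_structural_filter(self) -> bool:
        # A Claude candidate must pass every check, regardless of age.
        return all([
            self.h1_has_thesis,
            self.subheads_are_claims,
            self.direct_answer_on_top,
            self.named_proof_points,
            self.author_and_review_date,
            self.no_broken_links,
        ])

library = [
    # A 2-year-old case study that passes every filter...
    Asset("/case-studies/acme-2024", True, True, True, True, True, True),
    # ...and a 3-month-old post with label-style subheads and no proof points.
    Asset("/blog/quick-take-2026-03", True, False, True, False, True, True),
]

candidates = [a.url for a in library if a.passes_structural_filter()]
print(candidates)  # only the older, structurally sound piece qualifies
```

The point of scoring this way is that publication date never appears in the filter: structure is the only axis.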
Step 2: Run a Claude-specific prompt audit
Pick the 30 buyer prompts most central to your category. Run each one in Claude's web search and record:
- Did Claude cite you?
- If not, who did Claude cite, and what is the publication date of that source?
- What share of Claude's citations on your prompts are 12+ months old?
That last number is the one that matters. If 50%+ of Claude's citations on your category prompts are older than 12 months, the evergreen wedge is open in your space. Most B2B categories we have audited fall in the 50-70% range.
For a deeper procedure, see our guide on how to run an AI visibility audit.
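The key number from that audit, the share of cited sources older than 12 months, is a one-liner once the citations are logged. Everything in the sketch below (the prompts, domains, dates, and the audit date itself) is hypothetical example data.

```python
from datetime import date

# Hypothetical audit log: one row per citation Claude returned
# for a buyer prompt. All values are illustrative.
citations = [
    {"prompt": "best churn analytics tool", "domain": "competitor.com", "published": date(2024, 6, 1)},
    {"prompt": "best churn analytics tool", "domain": "yourbrand.com",  "published": date(2023, 11, 12)},
    {"prompt": "what is revenue retention", "domain": "wikipedia.org",  "published": date(2026, 1, 5)},
]

audit_date = date(2026, 5, 1)  # assumed date the audit was run

older_than_12mo = [c for c in citations if (audit_date - c["published"]).days > 365]
share = len(older_than_12mo) / len(citations)
print(f"{share:.0%} of cited sources are 12+ months old")
```

If that share clears 50%, the evergreen wedge is open in your category; in this toy log it comes out at two of three citations.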
Step 3: Refresh the structurally strong evergreen pieces
Take the 10-15 strongest evergreen assets from Step 1 and refresh them. Refresh, not rewrite.
Refresh means:
- Update the dates and any numbers that have changed
- Add a `lastUpdated` field in the page metadata
- Tighten subheadings so each one reads as a complete claim
- Add or expand the direct-answer block at the top
- Add structured data: Article, FAQPage, and HowTo where appropriate
- Patch any broken external citations
- Add 2-3 new internal links to related pillar pieces
A refresh keeps the URL stable. Claude already knows that URL exists. A rewrite under a new URL throws away the citation history.
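For the structured-data item on the refresh list, the Article markup is the piece that carries the last-updated signal. A minimal sketch of what a refresh would emit, with placeholder headline, author, and dates (the `@context`, `@type`, `datePublished`, and `dateModified` properties are standard schema.org Article fields):

```python
import json
from datetime import date

def article_schema(headline: str, author: str, published: date, last_updated: date) -> dict:
    """Build schema.org Article structured data for a refreshed page."""
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": published.isoformat(),
        "dateModified": last_updated.isoformat(),  # the refresh signal; URL stays stable
    }

schema = article_schema(
    headline="How Evergreen Case Studies Earn Citations",  # placeholder
    author="Jane Doe",                                     # placeholder
    published=date(2024, 2, 10),
    last_updated=date(2026, 4, 20),
)
print(json.dumps(schema, indent=2))
```

Note that `datePublished` stays at the original date; only `dateModified` moves, which is exactly the refresh-not-rewrite distinction.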
Step 4: Strengthen the long-form publications around you
Claude leans on long, analytical journalism. The 5W Index names NYT, The Atlantic, The New Yorker, and The Economist. For most B2B SaaS brands, those are not realistic placements.
The transferable lesson is to target the analytical publications in your category. For B2B SaaS, that means MIT Sloan Review, Harvard Business Review, IEEE publications, the long-form arms of trade press, and the analytical newsletters in your space. A single MIT Sloan Review byline carries more Claude weight than 10 short news placements.
Step 5: Track Claude separately in your AI visibility dashboard
If your reporting blends Claude with ChatGPT under "AI search citations," the recency-gap signal disappears.
Split Claude into its own row. Track:
- Claude citation count, weekly
- Claude citation share of category
- Average age of Claude-cited URL
- Top 10 Claude-cited sources in your category
That last metric is the early-warning system. The day a competitor's old case study starts showing up consistently inside Claude is the day they have figured out the wedge.
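The first three dashboard metrics fall out of the same citation log used in Step 2. A sketch with a hypothetical one-week snapshot (domains, dates, and the snapshot date are all made up for illustration):

```python
from datetime import date
from statistics import mean

# Hypothetical one-week snapshot of Claude citations in your category.
rows = [
    {"domain": "yourbrand.com",  "published": date(2024, 3, 1)},
    {"domain": "competitor.com", "published": date(2025, 12, 15)},
    {"domain": "yourbrand.com",  "published": date(2023, 8, 20)},
]
snapshot_date = date(2026, 5, 1)  # assumed

your_count = sum(r["domain"] == "yourbrand.com" for r in rows)
citation_share = your_count / len(rows)
avg_age_days = mean((snapshot_date - r["published"]).days for r in rows)

print(f"Claude citation count: {your_count}")
print(f"citation share of category: {citation_share:.0%}")
print(f"average cited-URL age: {avg_age_days:.0f} days")
```

Logged weekly into its own dashboard row, this is enough to see the recency-gap signal that disappears when Claude is blended into a generic "AI search" metric.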
For more on which AI platform to optimize for first, see Which LLM should you optimize for and How to get cited by ChatGPT, Claude, Perplexity, and Gemini.
Stop letting your strongest evergreen content sit unused inside Claude.
Cite Solutions runs platform-specific GEO audits that surface where Claude is already citing your library, where it is citing competitors instead, and what to refresh first.
Book a Discovery Call

A few honest caveats
The recency gap is real, but it is not absolute.
Claude does cite recent sources when the prompt is time-sensitive. Pricing pages, product release notes, and news-driven queries still favor fresh content even on Claude. The 36% number is a journalism average across all queries, not a guarantee that every query will pull older content.
Claude's citation behavior also drifts. The citation drift research we cover shows that source pools shift inside single months. The 36% recency figure is a snapshot from the period 5W analyzed, August 2024 through April 2026. It will move.
And evergreen tolerance is not the same as evergreen rewards. Claude tolerates older content. It still rewards good structure. A 4-year-old asset with weak structure does not become citation-worthy because Claude is patient with age.
The point is that age alone is not the disqualifier on Claude that it is on ChatGPT. That changes what is worth investing in.
FAQ
Why does Claude cite older content more than ChatGPT?
Claude is positioned as a reasoning engine, runs on a smaller and more curated retrieval index, and is trained on a corpus that weights long-form analytical sources heavily. The 5W Index found that only 36% of Claude's journalism citations come from the past 12 months versus 56% for ChatGPT, which is consistent with Claude's preference for structurally strong, analytically dense sources regardless of date.
What types of B2B content does Claude cite most often?
Long-form explainers, structured case studies, well-organized product documentation, analytical journalism from outlets like The New York Times, The Atlantic, The New Yorker, and The Economist, and research-backed pillar content. The shared thread is structural depth and clear reasoning, not recency.
Should I refresh my old blog posts to improve Claude visibility?
Yes, but refresh, not rewrite. Keep the URL stable, update the dates and numbers, tighten subheadings into complete claims, expand the direct-answer block at the top, and add or fix structured data. Replacing a URL throws away the citation history Claude has already built around that page.
How do I track Claude citations separately from ChatGPT?
Run a fixed list of buyer prompts in Claude's web search on a weekly cadence and log citation count, citation share of category, average age of cited URL, and the top 10 cited domains. Split that data into its own dashboard row. Bundling Claude with ChatGPT in a single AI search metric hides the recency-gap signal.
Is the Claude evergreen wedge going to last?
Probably not forever. Claude's behavior drifts in single-month cycles, and Anthropic can adjust retrieval weighting at any time. The wedge is real now, in mid-2026, and it is large. Brands that capture Claude's evergreen citation pool while it is still under-contested will hold a measurable advantage even after the gap narrows.
What to do this week
If you only do one thing, run the Step 2 prompt audit. Pick 30 buyer prompts. Run them in Claude. Record the publication dates of every cited source.
If 50%+ of those citations are older than 12 months, the wedge is open in your category. Your refresh queue should start with the 10-15 strongest evergreen assets you already have, not with new content you have not yet written.
The sites that get cited by Claude in 2026 are not the ones publishing the most. They are the ones whose old work is still structurally sound.
Continue the brief
How 15 Sites Decide B2B SaaS AI Visibility
5W's new index synthesizes 680M citations across ChatGPT, Claude, Perplexity, Gemini and AI Overviews. 15 domains hold 68%. B2B SaaS targets almost none.
Claude for Excel Is Live. Will It Cite You?
Anthropic shipped Claude inside Excel, Word, and PowerPoint. Customer-internal documents are now a citation surface most B2B SaaS teams ignore.
Anthropic's $1.5B Services JV Is a Claude GEO Event
Anthropic, Blackstone, Goldman Sachs and Hellman & Friedman just spun up a $1.5B services firm aimed at PE-owned mid-market. Here is the GEO read.