If your content team is rewriting URL slugs to win AI citations, the data says you are wasting time.
On May 7, 2026, Otterly published The URL AI Citation Study 2026. They analyzed 1,028,959 unique URLs and 1,932,200 individual citation instances across six AI platforms in a 24-hour window: ChatGPT, Google AI Overviews, Google AI Mode, Perplexity, Gemini, and Microsoft Copilot.
It is the largest URL-level dataset on AI citations published in 2026. The headline finding lands hard.
URL length, path depth, hyphen count, and domain length all show near-zero correlation with whether a page gets cited. What matters is page type. Guides earn 80% more citations than pricing pages. Clean URLs without query strings earn 24% more than tracking-fragmented ones. The top 15.8% of URLs capture half of all citations.
This post breaks the findings down, explains why the SEO-folklore advice on URL structure transfers poorly to AI search, and gives you a five-step fix.
Key numbers from the study (Otterly, May 7, 2026)

- 1,028,959 unique URLs and 1.93 million citation instances analyzed in a 24-hour window across six platforms: ChatGPT, Google AI Overviews, Google AI Mode, Perplexity, Gemini, and Microsoft Copilot.
- URL structure (Pearson r): all five attributes tested register as statistically negligible. A 100-character URL is no more or less likely to be cited than a 40-character URL.
- Page type: guides earn 80% more citations than pricing pages, the largest categorical effect in the study.
- Query strings (n=430,628 query-bearing URLs): UTM and tracking parameters fragment a single canonical URL into hundreds of variants, each earning 24% fewer citations than the clean version.
- Concentration: the top 15.8% of URLs capture 50% of citations. Median citations per URL: 1; maximum on a single URL: 965. The distribution is power-law, not normal. Concentration beats spread.
What Otterly actually measured
Most URL-structure advice in 2026 still leans on Google SEO conventions: short URLs, hyphens between words, shallow path depth, descriptive slugs. Those rules came from a decade of ranking-factor research on ten-blue-links search.
AI search is a different retrieval problem. Models read body text, structured data, and link graphs. The URL slug is one of the weakest signals in the stack.
Otterly tested this directly. They pulled correlation coefficients for five URL attributes against citation counts on a 1M+ URL sample.
| URL attribute | Correlation (r) | Practical effect |
|---|---|---|
| URL length | -0.025 | Negligible |
| Domain length | -0.007 | Negligible |
| Path depth | +0.002 | Negligible |
| Hyphen-word count | -0.013 | Negligible |
| Domain dots | +0.039 | Negligible |
A 100-character URL is no more or less likely to be cited than a 40-character URL. A page at /blog/category/sub-category/post-slug has the same citation odds as a page at /post-slug. None of the five attributes registered as more than statistical noise.
That is the diagnosis. URL structure is not the lever.
Five things URL structure does not do
The patterns that SEO consultants have charged for over the past decade do nothing for AI citations. Otterly tested each one against the 1M-URL sample.
Reason #1: URL length has no effect on citation rate
Correlation: -0.025. The longest URLs in the sample carry no penalty. The shortest carry no advantage. If your team has a backlog ticket to compress slugs from how-to-build-a-content-strategy to content-strategy, close the ticket. There is no AI citation upside in either direction.
Reason #2: Path depth does not predict citations
Correlation: +0.002. Deeply nested URLs at /resources/guides/category/topic/post get cited at the same rate as flat URLs at /post. Path depth was a stronger signal for early Google ranking. It transfers into AI search as noise.
Reason #3: Hyphen counts in slugs are statistically dead
Correlation: -0.013. Whether your slug uses two hyphens or seven, citation behavior does not change. The "hyphens-help-readability" rule still matters for human users. It does not appear in AI retrieval signals.
Reason #4: Question-format URLs do not get a citation lift
Otterly tested URLs containing how-to, what-is, and similar question patterns. Average citations: 1.8. Non-question URLs: 1.9. The conventional wisdom that AI engines reward question-format slugs is not supported in the data. AI extracts from headings and body text, not from URL slugs.
Reason #5: Year-in-URL has zero observable effect
URLs containing 2026, 2025, or any year token were cited at the exact same rate as year-free URLs. The freshness signals AI engines use come from page content, schema dates, and site signals. Not the slug.
Optimizing URL slugs for AI is folklore. The 80% citation gap between guide pages and pricing pages is the actual lever.
Three things that actually drove citations
The structural URL attributes registered as noise. The categorical attributes lit up. Three findings did the heavy lifting.
Finding #1: Page type opens an 80% citation gap
This is the largest single signal in the study. Pages classified as guides earned an average of 2.7 citations. Pages classified as pricing earned 1.5. The gap is 80% on a per-URL basis.
| Page type | Avg citations | vs baseline (1.9) |
|---|---|---|
| Guide | 2.7 | +42% |
| Blog | 2.0 | +5% |
| Help | 2.0 | +5% |
| Docs | 1.6 | -16% |
| Product / Service | 1.6 | -16% |
| Pricing | 1.5 | -21% |
Most B2B SaaS sites are heavy on product and pricing pages and thin on guides. That structural mismatch is now quantifiable. If your content portfolio is 40% product, 30% pricing, 20% blog, and 10% guides, you have built a site that AI cites at the lower end of the spectrum.
This finding lines up with what Evertune reported on May 1, 2026, in their 33,000 ChatGPT-cited URL analysis. Half of all ChatGPT citations went to listicles. The median cited page was 941 words with 4 H2 headers and 15 external links. Guides carrying that structural profile dominate the citation pool.
Finding #2: Clean URLs earn 24% more citations than query-string URLs
Otterly isolated 430,628 URLs that carried query strings. Clean URLs averaged 2.1 citations. URLs with query strings averaged 1.6.
The mechanism is straightforward. UTM parameters, fbclid, gclid, and HubSpot tracking tokens fragment a single canonical URL into hundreds or thousands of variants. Each variant carries a fraction of the citation share that a clean URL would earn. The signal gets diluted across the variants.
For most B2B SaaS teams, this is a one-day fix at the CDN or CMS layer. Strip tracking parameters from canonical URLs. Set up rel=canonical to point at clean versions. Audit Slack, email signature, and social-post sharing flows to make sure tracking parameters are stripped before sharing.
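The stripping step usually lives at the CDN or CMS, but it is easy to prototype. A minimal sketch of a blocklist canonicalizer; the TRACKING_PARAMS set is illustrative and should be extended to cover whatever your analytics stack actually appends:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Illustrative blocklist -- extend with your stack's tracking tokens.
TRACKING_PARAMS = {
    "utm_source", "utm_medium", "utm_campaign", "utm_term", "utm_content",
    "fbclid", "gclid", "msclkid", "ref",
}

def canonicalize(url: str) -> str:
    """Return the clean canonical form of a URL: tracking parameters
    stripped, functional query params (pagination, search) preserved,
    fragment dropped."""
    scheme, netloc, path, query, _fragment = urlsplit(url)
    kept = [
        (key, value)
        for key, value in parse_qsl(query, keep_blank_values=True)
        if key.lower() not in TRACKING_PARAMS
    ]
    return urlunsplit((scheme, netloc, path, urlencode(kept), ""))
```

A blocklist is deliberately conservative: unknown parameters survive, so pagination and search URLs keep working while the common trackers are removed.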
Finding #3: Top 15.8% of URLs capture 50% of all citations
The citation distribution is power-law, not normal. The median URL gets 1 citation. The top 15.8% capture half the citations. The single most-cited URL in the sample received 965 citations.
AI citation strategy is concentration, not spread. A handful of well-built guide pages will outperform a wide blog-post sprawl.
This is the strongest argument for a flagship-page strategy. Five well-resourced guide pages, refreshed quarterly, will outperform forty thin blog posts published once and abandoned. The math is in the distribution.
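You can measure the same concentration on your own data. A small sketch, assuming a plain list of per-URL citation counts from your tracking export:

```python
def top_share(citation_counts, top_fraction):
    """Fraction of total citations captured by the top `top_fraction`
    of URLs, e.g. top_share(counts, 0.158) mirrors the study's stat."""
    counts = sorted(citation_counts, reverse=True)
    k = max(1, round(len(counts) * top_fraction))
    return sum(counts[:k]) / sum(counts)
```

If your own top 15% capture well under half of your citations, you have spread rather than concentration, and flagship pages are the correction.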
Your URL structure is fine. Your page-type mix is the problem.
We audit your content portfolio against the citation-density profile in the Otterly data, identify the page-type gaps that are costing you AI citations, and rebuild around guides that match the structural profile of cited content.
Book a Discovery Call

Why the SEO-folklore advice transferred poorly
The URL-optimization rulebook came from Google ranking research between 2012 and 2020. Short slugs, descriptive keywords, shallow paths, and hyphenated word separators all correlated with rankings in studies from Backlinko, Ahrefs, and SEMrush at the time.
Two things changed: the retrieval pipeline, and the questions it asks of a page.
Traditional SEO asks:
- What keyword should this URL target?
- How short can the slug be?
- How shallow is the path depth?
- Does the slug contain the primary keyword?
AI search asks:
- What page type is this?
- Does the body text contain extractable passages?
- Is the content depth in the 500 to 2,000 word range?
- Does the page link out to primary sources?
- Is the canonical URL clean of tracking parameters?
The AI retrieval pipeline reads body content and structured data. URL slugs are a deprioritized signal in the stack. The folklore that produced ranking gains in 2018 produces no measurable citation lift in 2026.
This pattern is not unique to URL structure. It is the third "SEO best practices that fail for AI" experiment Otterly has run in 30 days. Markdown-mirror pages produced zero citations. Image alt text is invisible to AI retrieval. Now URL structure joins the list. The pattern is consistent: signals that helped Google rankings are not the signals AI engines use to pick citations.
How to fix this in five steps
The findings are clean. The fix is operational. Five steps, all implementable inside one quarter.
Step 1: Stop running URL-rewrite projects
If your team has an active backlog item to compress slugs, restructure path depth, or add year tokens to URLs for AI search reasons, kill the project. The data says it does nothing. Redirect the engineering hours to content production or query-string hygiene.
Step 2: Run a query-string audit
Pull a list of every URL on your site that has been shared with tracking parameters. Most B2B SaaS analytics stacks expose this data inside Google Analytics, Mixpanel, or Amplitude. Identify the top 50 most-shared URLs that carry UTM, fbclid, gclid, or HubSpot tracking tokens.
For each, configure your CDN or CMS to canonicalize to the clean version. Strip tracking parameters before the URL is shared in Slack messages, email signatures, and social posts. The 24% citation lift is a one-day implementation gain.
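The grouping step of the audit is mechanical. A sketch, assuming you can export the raw shared URLs from your analytics tool; it ranks the pages whose citation signal is split across the most parameter-bearing variants:

```python
from collections import Counter
from urllib.parse import urlsplit

def fragmentation_report(shared_urls, top_n=50):
    """Group shared URLs by their parameter-free form and rank the
    pages diluted across the most tracked variants."""
    variant_counts = Counter()
    for url in shared_urls:
        scheme, netloc, path, query, _fragment = urlsplit(url)
        if query:  # only parameter-bearing shares fragment the canonical
            variant_counts[f"{scheme}://{netloc}{path}"] += 1
    return variant_counts.most_common(top_n)
```

The output is your prioritized fix list: start canonicalization with the pages at the top of the report.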
Step 3: Map your portfolio by page type
Build a simple inventory of every URL on your site, classified by page type: guide, blog, help, docs, product, pricing. Most B2B SaaS sites carry too much weight on the bottom three categories.
If your guide page count is under 10% of total content, you have a structural problem. The Otterly data says guides outperform every other page type by a wide margin. Pricing pages, which most B2B SaaS marketing teams over-invest in, sit at the bottom of the citation pool.
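The inventory can start as simple path-pattern matching. A sketch under the assumption of conventional B2B SaaS path naming; the patterns are illustrative and need adjusting to your site's actual URL conventions:

```python
import re
from collections import Counter
from urllib.parse import urlparse

# Illustrative patterns -- adjust to your site's real path conventions.
PAGE_TYPE_PATTERNS = [
    ("guide",   re.compile(r"/(guides?|handbook)(/|$)")),
    ("pricing", re.compile(r"/pricing(/|$)")),
    ("docs",    re.compile(r"/(docs|documentation|api)(/|$)")),
    ("help",    re.compile(r"/(help|support|faq)(/|$)")),
    ("blog",    re.compile(r"/blog(/|$)")),
    ("product", re.compile(r"/(products?|features|solutions)(/|$)")),
]

def classify(url: str) -> str:
    """Map a URL to a page type by path pattern; 'other' if no match."""
    path = urlparse(url).path.lower()
    for page_type, pattern in PAGE_TYPE_PATTERNS:
        if pattern.search(path):
            return page_type
    return "other"

def portfolio_mix(urls):
    """Fraction of the portfolio in each page type."""
    counts = Counter(classify(url) for url in urls)
    return {page_type: n / len(urls) for page_type, n in counts.items()}
```

Run it over your sitemap export and compare the guide fraction against the 10% threshold above.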
Step 4: Build more guides
A guide is not a blog post. It is a long-form, structurally dense, externally referenced reference document. The Evertune profile says the median cited page is 941 words, has 4 H2 sections, and links out to 15 external sources.
Plan for five flagship guides per quarter. Each one should be:
- 1,500 to 2,500 words of body content
- Organized with 4 to 6 H2 sections, each answering a discrete question
- Carrying 10 to 20 external links to primary sources
- Refreshed every 60 to 90 days
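The checklist above is scriptable. A standard-library sketch, assuming you fetch each page's HTML yourself; the thresholds encode the targets listed above, and the word count is approximate because it counts every text node on the page:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class GuideProfileChecker(HTMLParser):
    """Checks a page's HTML against the flagship-guide targets:
    body words, H2 sections, and external links."""

    def __init__(self, own_domain: str):
        super().__init__()
        self.own_domain = own_domain
        self.words = 0
        self.h2_count = 0
        self.external_links = 0

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self.h2_count += 1
        elif tag == "a":
            host = urlparse(dict(attrs).get("href") or "").netloc
            if host and host != self.own_domain:
                self.external_links += 1

    def handle_data(self, data):
        # Rough word count: includes all text nodes, not just body copy.
        self.words += len(data.split())

    def report(self) -> dict:
        return {
            "words_ok": 1500 <= self.words <= 2500,
            "h2_ok": 4 <= self.h2_count <= 6,
            "external_links_ok": 10 <= self.external_links <= 20,
        }
```

Feed it the HTML of each candidate guide and flag any page where `report()` returns a False; those are the refresh candidates for the quarter.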
Five guides per quarter, refreshed and maintained, will outperform forty thin posts published once. The citation distribution rewards concentration.
Step 5: Fix link hygiene at the share layer
Train your team to strip tracking parameters before they share URLs. Set up automation in your CMS or CDN that redirects all parameter-bearing variants of a canonical URL to the clean version. Audit your email signature templates, Slack auto-share flows, and social-post sharing tools to make sure UTMs are not appended by default.
This is not a glamorous workstream. It is a one-week effort that captures a 24% citation lift on every shared link.
How this fits with the other AI citation evidence
The Otterly URL study is one piece of a larger picture. The structural profile of AI-cited content is now well documented across multiple independent datasets.
Evertune analyzed 33,000 ChatGPT-cited URLs and found the median page is 941 words with 4 H2 headers, 28 internal links, 15 external links, and 10 images. AirOps tested 16,851 queries and 50,553 responses and found the optimal word count band is 500 to 2,000 words. The 5W AI Visibility Index reported earned media at 84% of AI citations across 25 million links analyzed.
The pattern across datasets is consistent. AI citations go to focused, structurally dense content of guide-like character, with body content that links out to primary sources and lives at clean canonical URLs. The structural profile is identifiable and reproducible.
URL slugs are not in the profile. They never were.
FAQ
Does URL length affect AI citations?
No. Otterly's analysis of 1,028,959 URLs found a Pearson correlation of -0.025 between URL length and citation count, statistically negligible. A 100-character URL has the same citation odds as a 40-character URL. URL length was an early Google ranking signal that did not transfer into AI search retrieval.
Should I add the year to my URL slug for AI search?
No. Otterly's data shows URLs containing year tokens get cited at the same rate as year-free URLs. The freshness signals AI engines use come from body content, schema dates, and lastUpdated metadata, not from the slug. Adding 2026 to your URL produces no measurable citation lift.
Why do guide pages outperform pricing pages by 80% in AI citations?
Guide pages carry the structural profile that AI retrieval pipelines extract from. They are typically 1,500 to 2,500 words, organized into discrete H2 sections, and link out to primary sources. Pricing pages are short, conversion-focused, and rarely contain extractable passages. The 80% gap is a content-density and structural-format gap, not a URL gap.
What does "clean URL" mean in the Otterly study?
A clean URL is one without query string parameters. Tracking parameters like utm_source, fbclid, gclid, and HubSpot CTA tokens fragment a canonical URL into many variants. Otterly found that across 430,628 URLs with query strings, average citations dropped 24% versus the clean version. Stripping tracking parameters at the canonical layer recovers the lost citation share.
How many URLs capture most AI citations?
The distribution is power-law. The top 15.8% of URLs capture 50% of all citations. The top 20% capture 54%. Over half of URLs receive exactly one citation in a 24-hour window. The median is 1, but the maximum on a single URL was 965. Concentration on a small number of flagship pages produces more citation share than spreading effort across many thin posts.
The takeaway
If your AI visibility roadmap has a URL-optimization workstream, kill it.
The data is unambiguous. URL slugs do not predict citations. Page type does. Clean canonical URLs do. Structural content depth does. The five-step fix is portfolio-level work, not slug-level work. Five flagship guides per quarter, refreshed on a 60 to 90 day cadence, with clean canonical URLs and tracking parameters stripped at the share layer.
That is the program that captures AI citations in 2026. The URL slug is a side quest.
Audit your portfolio against the citation profile.
We map your content portfolio by page type, identify the structural gaps in your guide coverage, and run the query-string hygiene fix that captures the 24% citation lift Otterly documented. Five flagship guides per quarter, built to the citation profile.
Book a Discovery Call