What happened
ChatGPT changed how it searches the web, and most brands haven't noticed yet.
Peec AI analyzed 20 million query fan-outs (QFOs) between October 2025 and January 2026. The finding: the average word count per fan-out roughly doubled, from about 6 words to about 12, with some weeks peaking at 16 words.
That sounds like a technical footnote. It isn't. For anyone working on Generative Engine Optimization (GEO), this is one of the most significant shifts in how ChatGPT retrieves and cites content.
What's a query fan-out, and why should you care?
When someone asks ChatGPT a question, it doesn't just pass the raw prompt to a search engine. It breaks the question apart into multiple search queries (fan-outs), each designed to pull a specific piece of information.
A prompt like "What's the best CRM for a 50-person B2B company?" might generate queries like:
- "best CRM tools B2B mid-market 2026"
- "CRM comparison 50 person company features pricing"
- "B2B CRM reviews enterprise vs SMB"

[Diagram: how query fan-out works. The user prompt fans out into search queries like these, and the AI synthesizes its answer from the retrieved sources.]
Each of those fan-outs retrieves different sources. The sources that get retrieved are the ones that get cited in the final answer.
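That loop can be sketched in a few lines of Python. Everything here is illustrative: the index, the domains, and the keyword-overlap matching are stand-ins for whatever retrieval stack ChatGPT actually uses, which isn't public.

```python
# Toy model of query fan-out: a prompt becomes several sub-queries,
# each sub-query retrieves sources, and the union of everything
# retrieved is the pool the model can cite from.
# The index, URLs, and matching rule are invented for illustration.

def retrieve(query, index):
    """Return sources sharing at least one keyword with the query."""
    terms = set(query.lower().split())
    return [src for src, keywords in index.items() if terms & keywords]

# Fake search index: source URL -> keywords it would rank for.
index = {
    "vendorA.com/crm-comparison": {"crm", "comparison", "pricing", "b2b"},
    "blogB.com/best-crm-2026":    {"best", "crm", "2026", "tools"},
    "vendorC.com/smb-reviews":    {"reviews", "smb", "crm"},
}

fan_outs = [
    "best CRM tools B2B mid-market 2026",
    "CRM comparison 50 person company features pricing",
    "B2B CRM reviews enterprise vs SMB",
]

citation_pool = set()
for q in fan_outs:
    citation_pool.update(retrieve(q, index))
```

The key point survives the simplification: if none of the fan-outs retrieves your page, your page cannot be cited, no matter how good it is.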
[Chart: ChatGPT query fan-out word count. Average words per fan-out query, Oct 2025 to Jan 2026.]
Why the doubling matters
Here's the shift: when fan-outs were 6 words, they were broad. "Best CRM tools 2026." Lots of generic content matched that query. Your standard listicle had a shot.
At 12+ words, the queries are specific. "Mid-market B2B CRM comparison pipeline management integration pricing." Generic content doesn't match anymore. The AI is looking for content that answers precise sub-questions.
This creates two distinct effects:
Winners: Brands with detailed, specific content, like comparison pages with real feature breakdowns, pricing data, and use-case analysis. They're getting cited more because they match these longer, more precise queries.
Losers: Brands relying on broad "Best X tools" listicles are losing citations because the queries have gotten too specific for generic content to match.
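A toy scoring model makes the split concrete. The keyword-overlap measure below is a deliberate simplification (real retrieval uses embeddings and rankers, not word counting), and both example pages are invented, but the dynamic it shows is the one in Peec AI's data.

```python
# Score a page by what fraction of the query's terms it contains.
# Both "pages" and both queries are made up for illustration.

def coverage(query, page_text):
    q = set(query.lower().split())
    p = set(page_text.lower().split())
    return len(q & p) / len(q)

generic = "the best crm tools for 2026 ranked our top picks"
specific = ("mid-market b2b crm comparison covering pipeline management "
            "integration options and per-seat pricing tiers")

short_query = "best crm tools 2026"
long_query = ("mid-market b2b crm comparison pipeline management "
              "integration pricing")

# The generic listicle fully covers the 4-word query, but almost
# none of the 12-word query; the specific page does the reverse.
for q in (short_query, long_query):
    print(q, coverage(q, generic), coverage(q, specific))
```

At 6 words, the listicle wins; at 12, it barely registers. That crossover is the whole story of the doubling.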
The trend is global
Peec AI's analysis looked at five countries: Germany, UK, Singapore, Thailand, and the US. The doubling trend was virtually identical across all of them, regardless of language. German compound words, Thai script, English. Same pattern everywhere.
This isn't a regional algorithm tweak. It's a fundamental change in how ChatGPT processes user intent.
Another interesting finding: ChatGPT searches in English
Even when users ask questions in other languages, ChatGPT frequently generates English-language fan-outs. A German user asking about CRM tools in German gets English search queries behind the scenes.
For international brands, this means English-language content gets cited globally, not just in English-speaking markets. Your English comparison page might be getting cited to users in Tokyo, Berlin, and São Paulo.
What this means for your AI visibility strategy
Three things to act on right now:
1. Get specific or get invisible. Broad, high-level content is losing ground fast. Every key page on your site needs to answer specific sub-questions. Pricing for particular company sizes, feature comparisons for specific use cases, integration details for particular tech stacks.
2. Think in passages, not pages. ChatGPT retrieves at the passage level. A 40-60 word block that directly answers a specific question is more valuable than a 3,000-word article that vaguely covers the same topic. Our guide on how passages beat pages for AI citation covers how to structure your content so each section is a self-contained answer.
3. English content has global reach. If you're only creating localized content for non-English markets, you're missing citations. AI models are pulling English sources for non-English queries. Make sure your core content exists in English, even if you also publish in other languages.
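Point 2 above, passage-level retrieval, can be sketched with a toy chunker: split a page into 50-word windows and score each window against a fan-out query independently. The chunk size and the overlap scoring are our assumptions for illustration, not ChatGPT's actual method, but they show why one dense, specific passage can outrank pages of surrounding narrative.

```python
# Sketch of passage-level retrieval: a page is scored window by
# window, not as a whole. Chunk size and scoring are assumptions.

def passages(text, size=50):
    """Split text into fixed-size word windows (a toy chunker)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query, passage):
    """Fraction of query terms present in the passage."""
    q = set(query.lower().split())
    return len(q & set(passage.lower().split())) / len(q)

# An invented page: mostly filler narrative, plus one specific passage.
article = (
    "Our company history began in 2010 ... " * 10  # filler narrative
    + "For a 50-person B2B team, mid-market CRM pricing typically runs "
      "per seat, and the comparison below covers pipeline management "
      "and integration support for each vendor."
)

query = ("mid-market b2b crm comparison pipeline management "
         "integration pricing")

# The best-scoring window is the specific passage, not the filler.
best = max(passages(article), key=lambda p: score(query, p))
```

The filler-only window scores zero; the window containing the specific sentence matches every query term. That's why a self-contained 40-60 word answer block beats a vague 3,000-word article.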
Is your content specific enough for ChatGPT's new queries?
We'll run your brand through ChatGPT's fan-out queries and show you exactly where you're getting cited, and where you're getting skipped.
See Your Fan-Out Coverage

The number of fan-outs stayed flat
One detail worth noting from Peec AI's data: while the word count per fan-out doubled, the number of fan-outs per prompt stayed roughly constant. ChatGPT isn't issuing more searches. It's making each search more precise.
This is the AI getting better at understanding what users actually want. And it means the bar for "good enough to get cited" just went up. Knowing how to select the right prompts for LLM tracking becomes even more important when each fan-out is this precise.
What we're watching next
Query fan-outs are one of the few observable signals in AI search. For a broader look at how to optimize for ChatGPT search, we cover the full platform-specific playbook. We'll be tracking whether this trend continues, whether other models (Gemini, Perplexity, Claude) show similar patterns, and what content types perform best against longer, more specific queries.
Your competitors might already be winning these specific queries
ChatGPT just raised the bar. Find out if your content makes the cut, or if you're losing citations to brands with better passage-level answers.
Get Your AI Visibility Audit