Momentum is a B2B SaaS company building GTM tooling for sales and operations teams. Their Head of GTM Growth, Jonathan Kvarfordt, had a problem that maps to most B2B SaaS marketing departments right now: in the words of the Peec AI case study, legacy SEO tools "offered no ability to gauge how frequently LLMs like ChatGPT, Gemini, and Perplexity were citing Momentum."
He ran the analysis. The results were poor.
Then the team optimized 100 articles. Each one targeted a specific set of prompts their buyers were actually running in AI tools. They aligned the content format to the structures that AI models cite most often. They used Peec AI analytics to track citation behavior at the prompt level, not just the keyword level.
The result: 10 times the AI search visibility across all major LLMs. Sessions from AI search doubled. Some articles moved from no mentions to top-three positions on key prompts. And Momentum began outranking Salesforce and Zapier on dozens of queries.
That last part is what makes this worth examining. Salesforce has a marketing budget that dwarfs most venture-backed startups. In traditional Google search, that gap is nearly impossible to close. In AI search, a team with the right methodology closed it in a month.
Why AI search doesn't work like Google search
Understanding why this result is possible requires sitting with one uncomfortable data point.
EMGI Group analyzed 150 SaaS companies across 120 keywords. 44% of brands in Google's top 10 get zero ChatGPT citations. The inverse is equally striking: 81% of the brands ChatGPT does cite don't rank in Google's top 10.
These two platforms are running separate authority evaluations on the same queries, and they reach different conclusions most of the time.
The correlation data explains why. Topical authority has a 0.76 correlation with AI citation frequency. Organic traffic has a 0.23 correlation. A brand that owns a category in depth, even without massive domain authority or backlink counts, earns more AI citations than a brand generating traffic but not writing specifically about the questions its buyers ask.
For challenger brands, this is the opening. AI citation is not proportional to marketing budget. It's proportional to how precisely your content answers the specific questions buyers are running through AI tools.
Peec AI — Momentum Case Study, April 2026
Prompt-specific optimization: the three-step cycle
GTM SaaS platform. Results measured across ChatGPT, Gemini, Perplexity, and Claude over a ~30-day window.
Step 1: Audit. Map which prompts buyers use to research your category and identify the exact queries where you're invisible.
Step 2: Optimize. Create prompt-specific articles targeting each gap, aligned to LLM-preferred formats: listicles, comparisons, direct answer blocks.
Step 3: Measure. Track share of model by prompt, not traffic. Count how often your brand appears when buyers ask your category questions.
Momentum results after one sprint
10x AI visibility across ChatGPT, Gemini, Perplexity, and Claude
2x sessions from AI search within one month
40+ prompts where Momentum outranked Salesforce and Zapier
100 prompt-specific articles created in the optimization sprint
Why incumbent brands are less protected than they look
Earlier EMGI research found that challenger brands in B2B SaaS are cited only 22% of the time in AI search responses for their category. Dominant brands do better. But the mechanism is not the same as Google's PageRank. The gap isn't primarily about domain authority.
AI models don't evaluate your site as a whole. They extract specific passages to answer specific sub-queries. When a user asks "what's the best GTM tool for a 50-person sales team," ChatGPT fans that query out into sub-queries covering pricing, integration options, team size fit, and use cases. If Salesforce's content doesn't directly address the 50-person-sales-team angle but Momentum's does, Momentum wins that sub-query.
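To make the fan-out mechanic concrete, here is a toy sketch in Python. The sub-queries, brands, and coverage sets are invented for illustration; real models decompose queries internally and score individual passages, not whole sites.

```python
# Toy illustration of query fan-out: a buyer prompt is split into
# sub-queries, and each sub-query goes to whichever brand's content
# directly addresses it. All data below is invented for illustration.

prompt = "what's the best GTM tool for a 50-person sales team"
sub_queries = ["pricing", "integration options", "team size fit", "use cases"]

# Hypothetical map of which sub-queries each brand's content covers.
coverage = {
    "Salesforce": {"pricing", "integration options", "use cases"},
    "Momentum": {"pricing", "integration options", "team size fit", "use cases"},
}

for sub_query in sub_queries:
    winners = [brand for brand, topics in coverage.items() if sub_query in topics]
    print(f"{sub_query}: {', '.join(winners) if winners else 'no direct answer'}")
```

In this toy version, Momentum is the only brand answering the team-size sub-query, so it wins that slot even though Salesforce covers the broader topic.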
Incumbent brands often have strong content on what they do. They tend to have thinner content on the specific scenarios, comparisons, and question types that buyers use when they've already decided they need a tool and are choosing between options.
That's the gap Momentum targeted. Not "write more content about our product." Write specifically about the prompts their buyers were running in ChatGPT and Perplexity that weren't returning Momentum as an answer.
What Momentum actually did
The methodology has three components, each grounded in a specific data gap.
Prompt-level gap analysis. Kvarfordt used Peec AI to identify which prompts Momentum's buyers were running and where Momentum was invisible. This is a different analysis than keyword research. Keyword research identifies search terms. Prompt-level gap analysis identifies the conversational queries buyers are submitting to AI tools, with Momentum's citation rate per prompt as the output. The gaps were concrete: Momentum appeared in a small fraction of the prompts where it should have been competing.
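The analysis itself is simple once prompt-level citation data exists. A minimal sketch, assuming you have already exported per-prompt citation observations from a tracking tool (the records and threshold below are hypothetical placeholders, not Peec AI's API):

```python
# Minimal sketch of prompt-level gap analysis: compute a citation rate
# per prompt from repeated observations, then flag the gaps.
# Records and threshold are hypothetical placeholders.
from collections import defaultdict

# (prompt, was_our_brand_cited) pairs across repeated runs of each prompt.
observations = [
    ("best GTM software for sales ops teams", True),
    ("best GTM software for sales ops teams", False),
    ("GTM tools that integrate with Slack", False),
    ("GTM tools that integrate with Slack", False),
]

runs, hits = defaultdict(int), defaultdict(int)
for prompt_text, cited in observations:
    runs[prompt_text] += 1
    hits[prompt_text] += cited

GAP_THRESHOLD = 0.25  # arbitrary cutoff for "effectively invisible"
for prompt_text, n in runs.items():
    rate = hits[prompt_text] / n
    label = "GAP" if rate < GAP_THRESHOLD else "ok "
    print(f"{label} {rate:.0%}  {prompt_text}")
```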
Content aligned to specific prompts. The team created 100 articles. Each one was designed to appear in specific prompts, not just to cover a topic area broadly. They aligned to the content formats that AI models prefer most: listicle-style structured content, which accounts for 74.2% of all AI citations according to AirOps's 2026 dataset of 16,851 queries. And they used LLM-preferred content structures, specifically direct answers, comparison tables, and clear feature-by-feature breakdowns.
Tracking by prompt, not by traffic. Success was measured by share of model at the prompt level. Does Momentum appear when a buyer asks "best GTM software for sales ops teams?" Does it appear in the comparison responses? The goal was coverage of specific buyer prompts, not aggregate traffic growth.
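Share of model reduces to a simple frequency: over a fixed tracked prompt set, how often does each brand appear in the answers? A minimal sketch with invented data:

```python
# Share of model: the fraction of answers in a tracked prompt set that
# mention each brand. All answer data below is invented for illustration.

answers = {  # prompt -> brands that appeared in the model's answer
    "best GTM software for sales ops teams": ["Salesforce", "Momentum"],
    "GTM tool with Slack integration": ["Momentum", "Zapier"],
    "sales automation for a 50-person team": ["Salesforce"],
}

brands = ["Momentum", "Salesforce", "Zapier"]
total = len(answers)
for brand in brands:
    share = sum(brand in cited for cited in answers.values()) / total
    print(f"{brand}: {share:.0%} share of model across {total} prompts")
```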
Results appeared within approximately one month. 10x visibility across ChatGPT, Gemini, Perplexity, and Claude. AI-referred sessions doubled. And the prompt-level tracking showed Momentum appearing in positions where Salesforce and Zapier had been the default answers.
Want to know which prompts your buyers are running without finding you?
We run prompt-level gap analysis across ChatGPT, Gemini, Perplexity, and Google AI, then build the content that puts your brand in the answer. The Momentum result is the target outcome.
Book a Discovery Call
The content structure choices that drive citations
The Momentum team aligned to LLM-preferred formats. Here's what the research says about what those are.
AirOps's 2026 State of AI Search, based on 16,851 queries and over 50,000 responses, found that pages ranked first in Google are cited by ChatGPT 58.4% of the time, versus 14.2% at position ten. Position one gets disproportionate citation. Content quality and topical precision, not broad SEO strength, determine whether you hold that first position for AI-relevant queries.
The other structural factors, from the same dataset and related research:
Optimal word count for ChatGPT citations runs from 500 to 2,000 words. Pages above 5,000 words underperform even pages under 500 words. Momentum's focused-length approach fits this range.
Pages 30 to 89 days old perform at the highest citation rates. Too new (under 30 days) and too old (over two years) both get penalized. A constant refresh cycle, rather than a one-time content push, maintains citation performance over time.
Headings that directly match the query being asked return a 41% citation rate, versus 30% for loosely related headings. This is why prompt-specific content works: when the heading answers the question directly, the model can extract the passage and cite it with confidence.
JSON-LD schema correlates with a 38.5% citation rate, versus 32% for pages without it. The difference isn't enormous, but at scale it compounds.
FAQ schema specifically increases AI citations by 350%, based on Otterly's controlled experiment comparing 2,379 citations with schema against 529 without it. (A minimal schema sketch follows this list.)
Sequential heading hierarchies, meaning H2 sections that build logically through a topic, produce 2.8 times higher citation likelihood than pages with inconsistent or missing heading structure.
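As a concrete reference for the two schema points above, here is a minimal sketch that emits FAQPage JSON-LD using Python's standard library. The question and answer strings are placeholders; the @type fields follow the schema.org FAQPage vocabulary, which is what the cited experiments measured.

```python
# Minimal sketch: generate FAQPage JSON-LD (schema.org vocabulary).
# Question/answer strings are placeholders; swap in real page copy.
import json

faqs = [
    ("What is GTM software?", "A direct two-sentence answer goes here."),
    ("How much does GTM software cost?", "Another direct answer goes here."),
]

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

# Embed the output in the page head inside
# <script type="application/ld+json"> ... </script>
print(json.dumps(faq_schema, indent=2))
```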
None of these are expensive. They're formatting and content architecture choices. Salesforce doesn't automatically have them. If Momentum's 100 articles follow this structure and Salesforce's older content doesn't, Momentum wins the citation for that specific query.
The comparison content multiplier
One data point that doesn't get enough attention.
Kevin Indig's Growth Memo research analyzed 3,981 domains across 115 prompts and found that evaluation and comparison content generates 30 times more brand name mentions than informational content. This matters because 62% of AI citations are ghost citations: the domain gets a source link, but the brand name never appears in the response text. You're cited but not named.
Comparison content changes that ratio. A page that positions Momentum against Salesforce, Outreach, and Apollo by category, use case, team size, and pricing, with Momentum's name appearing explicitly in the analysis, produces a fundamentally different citation signal than a page explaining what GTM software is.
This is why the format choice matters beyond the citation rate. Listicles and comparison guides earn both the source citation and the brand name mention. Informational content tends to earn the citation without the name. For brand visibility in AI-generated vendor comparisons, the evaluation format is the one that counts.
Content velocity and the Salesforce gap
Brandi AI's February 2026 research found that brands publishing 12 or more optimized pieces per month gained AI visibility up to 200 times faster than brands publishing four per month. Momentum published approximately 100 articles in one month. That pace isn't required, but it shows the relationship between content velocity and speed of results.
Salesforce publishes content at scale, but not necessarily at the prompt specificity that AI citation requires. A smaller brand that publishes 12 highly targeted articles per month, each one answering a buyer prompt directly, can outpace a large brand publishing 100 broadly optimized articles per month.
The critical variable is not volume. It's prompt alignment. Each article needs to address a specific question buyers are running in AI tools. One article answering 10 vague variations of a topic earns fewer citations than 10 articles each directly answering one specific prompt.
For teams that can't sustain 100 articles in a month, the implication is about targeting rather than volume. The Momentum result came from 100 articles because they had 100 prompt gaps to close. A brand with 20 critical prompt gaps needs 20 focused articles. Prompt-level gap analysis tells you where to focus.
Not sure which prompts to target first?
We identify the highest-value prompt gaps in your category across all major AI platforms, prioritize by buyer intent and citation opportunity, and build the content roadmap that gets your brand into the answer.
Book a Discovery Call
FAQ
How long does it take to see results from prompt-specific optimization?
Momentum saw measurable results within approximately one month of publishing 100 targeted articles. Brands publishing fewer articles should expect results to take longer, scaled roughly to content production pace. AirOps freshness data shows that pages need 30 days to enter the optimal citation window, so new content won't be cited immediately. A team publishing 8 to 12 articles per month should expect two to three months for measurable share-of-model changes.
Does this approach work across Perplexity and Gemini, or just ChatGPT?
Momentum's results included gains across ChatGPT, Gemini, Perplexity, and Claude simultaneously. That said, each platform has different content preferences. Perplexity relies heavily on community sources like Reddit. Gemini has moved toward structured reference content since early 2026. Only 11% of domains are cited by both ChatGPT and Perplexity, so cross-platform citation overlap is limited. Prompt-specific structured content tends to work across platforms because it matches how AI models parse passages, but the off-site signals differ by platform and require separate optimization.
What if an incumbent has much stronger domain authority?
Domain authority predicts ChatGPT citations with a correlation of 0.18. Topical authority has a 0.76 correlation. A brand with 100 pages covering a category from 40 angles, with specific data and direct answers on each, will earn more AI citations in that category than a brand with higher domain authority and 20 pages covering the same space generically. The Salesforce example from the Momentum case study is the clearest available proof of this.
What content format should I start with?
Listicle-format content, meaning "Top N" comparisons and ranked category guides, accounts for 74.2% of AI citations according to AirOps. The list must be structured around category evaluation, not self-promotion. Self-promotional listicles earn nearly zero citations. A comparison guide covering five tools in your category with stated criteria and specific data per entry is the format that performs consistently.
How do I measure whether this is working?
Traffic is the wrong metric for this. Share of model measures how often your brand appears when your category is discussed, across a defined set of tracked prompts. Platforms like Peec AI, Profound, and Otterly track this at the prompt level. The metric that maps to pipeline: for the 10 prompts most likely to precede a buying decision in your category, how often does your brand appear? That's the number to move.
What the Momentum result actually means
There are two ways to read it.
One reading: 10x visibility in a month sounds like a hype stat. It likely reflects a low starting point. Momentum was nearly invisible in AI search before the optimization program. Going from near-zero to visible is easier than going from visible to dominant.
The other reading: that low starting point is where most B2B SaaS companies are today. The EMGI data shows 23% of brands score zero across all AI platforms. 44% of Google top-10 SaaS brands have zero ChatGPT citations. A large share of the market is at or near Momentum's starting point.
The methodology doesn't require a large team. It requires knowing which prompts your buyers are running, creating content that directly answers those prompts, and tracking results by prompt rather than by traffic. Most B2B marketing teams have those resources. Most haven't started yet.
Salesforce won't adjust its content strategy because a challenger brand is outranking it on 40 prompts. That's not how companies with that much existing momentum operate. Which means the window stays open longer than it would in traditional search.