On February 5, 2026, Perplexity launched Model Council for its Max subscribers. The feature runs three frontier AI models (GPT-5.4, Claude Opus 4.6, and Gemini 3.1 Pro) in parallel on every query. A fourth synthesis model then analyzes all three outputs, identifies where they agree, flags where they disagree, and highlights what each model contributed uniquely.
This is not a comparison tool. It is a research verification system, and it changes what it means for a brand to appear in a Perplexity answer at the top tier.
The citation implication is straightforward: a brand that exists in all three citation pools (OpenAI's training data, Anthropic's training corpus, and Google's Gemini citation index) appears as a high-confidence Council answer. A brand present in only one or two pools surfaces in the "divergence" analysis, which may actually increase visibility as a contested option. A brand present in none is completely absent, regardless of how strong its presence is on any individual platform.
Perplexity's $200/month Max subscribers are among the most research-intensive AI users on any platform. When they run vendor evaluation queries, they now get three frontier AI opinions synthesized into a convergence analysis, not one. Every brand on a shortlist faces that check automatically.
How Model Council works
When a Max subscriber submits a query, Perplexity routes it to all three models simultaneously. Each model generates its own full response, drawing from its own training data and retrieval stack. GPT-5.4 pulls from OpenAI's training data and search layer. Claude Opus 4.6 draws from Anthropic's training corpus and its own web retrieval. Gemini 3.1 Pro pulls from Google's citation index, the same pool that feeds AI Overviews and AI Mode.
The synthesis model, which Perplexity calls the "chair," then does three things: it identifies convergence signals, the points all three models agree on; divergence signals, the points where models give conflicting information; and unique contributions, insights that only one model raised.
The output a user sees is the synthesized analysis, with explicit markers for what has multi-model support versus what remains contested.
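At the architecture level, the pattern Perplexity describes is a parallel fan-out with a synthesis step. The sketch below is a minimal illustration of that pattern, not Perplexity's implementation; every function name here is a hypothetical stand-in, and in production each would call a different provider's API.

```python
import asyncio

# Hypothetical stand-ins for the three Council models and the chair.
async def ask_gpt(query: str) -> str:
    return f"GPT answer to: {query}"

async def ask_claude(query: str) -> str:
    return f"Claude answer to: {query}"

async def ask_gemini(query: str) -> str:
    return f"Gemini answer to: {query}"

async def ask_chair(prompt: str) -> str:
    return f"Synthesis of:\n{prompt}"

async def council(query: str) -> str:
    # Fan out: all three models answer the same query in parallel,
    # each from its own training data and retrieval stack.
    answers = await asyncio.gather(
        ask_gpt(query), ask_claude(query), ask_gemini(query)
    )
    # Chair step: a fourth model compares the three answers and labels
    # convergence, divergence, and unique contributions.
    chair_prompt = (
        "Mark claims all three answers agree on, claims they conflict "
        "on, and claims only one answer raised:\n\n" + "\n---\n".join(answers)
    )
    return await ask_chair(chair_prompt)

print(asyncio.run(council("Best CRM for B2B sales teams?")))
```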
How Model Council evaluates brand citations (Perplexity Model Council, February 2026, Max subscribers only): three models run in parallel on a Max subscriber's query such as "What's the best [category] tool for enterprise teams in 2026?", and a chair model synthesizes agreement and disagreement.

| Model | Citation pool | Primary citation inputs |
|---|---|---|
| GPT-5.4 | OpenAI training data | Wikipedia entries, Reddit threads, G2 reviews and press, forum content |
| Claude Opus 4.6 | Anthropic corpus | Premium journalism, LinkedIn long-form, structured reference content, editorial sources |
| Gemini 3.1 Pro | Google citation index | JSON-LD schema pages, AI Overviews sources, YouTube transcripts, FAQ-structured content |

The chair model (synthesis layer) labels three signal types: convergence (all three models agree; cited as high-confidence fact), divergence (one or two models disagree; flagged as contested or uncertain), and unique contributions (one model surfaces information the others missed; noted explicitly).

What happens to your brand in a Council answer:

| Brand presence | Council outcome |
|---|---|
| Present in all 3 citation pools | High-confidence convergence signal. All three models agree; the chair model flags it as a verified answer. |
| Present in 2 of 3 citation pools | Divergence flag, noted as contested. The Council highlights the model that excluded the brand; the brand appears as a debated option. |
| Present in 0 citation pools | Completely absent from the answer. No convergence, no divergence note; the brand does not exist in this query. |
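The outcome logic in the table above reduces to a simple decision rule. Here is a minimal sketch of that rule, assuming presence in each pool can be established independently (for example, via the per-platform testing described later in this piece); the pool labels are illustrative.

```python
ALL_POOLS = {"openai", "anthropic", "gemini"}

def council_outcome(pools_present: set[str]) -> str:
    """Map a brand's citation-pool presence to its likely Council outcome.

    `pools_present` is a subset of ALL_POOLS.
    """
    if pools_present == ALL_POOLS:
        return "convergence: high-confidence, verified answer"
    if pools_present:
        missing = sorted(ALL_POOLS - pools_present)
        return f"divergence: contested option, absent from {missing}"
    return "absent: the brand does not exist in this query"

# Example: a brand visible to OpenAI and Gemini but not Anthropic.
print(council_outcome({"openai", "gemini"}))
```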
The triple citation problem
The three models in Model Council are not pulling from the same source pool. They never were. That is the entire point.
Sill's analysis of 139 brands found that 91.6% of cited URLs appear on only one AI platform. The cross-platform citation overlap is near zero. Separate research found that only 11% of domains are cited by both ChatGPT and Perplexity.
Model Council makes that near-zero overlap directly visible to users. A brand that has invested heavily in ChatGPT citation optimization may have zero presence in Gemini's citation index, zero presence in Claude's training corpus, and appear as a gap or "noted only by one model" flag in the Council synthesis. The user sees the gap explicitly, which is worse than being absent from a single-model answer where no gap is shown.
The 5W AI Platform Citation Source Index 2026, published May 1, 2026 and drawing from 680 million citations across five platforms, confirms this structural divergence. The top 15 domains capture 68% of all AI citation share, but which 15 domains make that list varies significantly by platform. Claude specifically favors premium journalism sources. Perplexity prioritizes primary sources and B2B authority domains. ChatGPT is heavily weighted toward Wikipedia, Reddit, and editorial sites. Three different platforms, three different citation hierarchies.
For B2B SaaS brands, those differences are not minor variations. They require different content and off-site strategies for each pool.
What feeds each citation pool
Each of the three pools in Model Council has different primary inputs.
| Citation pool | Primary inputs | Key characteristic |
|---|---|---|
| GPT-5.4 / OpenAI training data | G2 reviews, Wikipedia, Reddit threads, press coverage | 65.5% of ChatGPT responses use training data, not live web search |
| Claude Opus 4.6 / Anthropic corpus | Premium journalism, LinkedIn long-form articles, structured evergreen content | Favors authoritative older sources over freshness signals |
| Gemini 3.1 Pro / Google citation index | JSON-LD schema pages, FAQ-structured content, internally linked topic clusters | 50% hallucination rate, down from 88% for Gemini 3 Pro |
ChatGPT's 65.5% training data reliance makes training data coverage the primary lever for OpenAI's share of the Council answer. G2 reviews, Wikipedia entries, and Reddit threads are the primary inputs. The Apollo.io case study is the clearest evidence of how quickly Reddit coverage scales: a single competitor comparison thread generated 3,000 additional citations in one week.
Claude's citation pool rewards different inputs. The 5W citation index data shows that only 36% of Claude's journalism citations come from the past year, compared to 56% for ChatGPT. Evergreen authority sources matter more than recency. Long-form LinkedIn articles, coverage in premium editorial outlets, and structured reference content are the path into Anthropic's training corpus.
Gemini's pool is where the recent hallucination improvements matter most. Gemini 3.1 Pro's hallucination rate dropped to 50%, down from 88% for Gemini 3 Pro, according to the AA-Omniscience benchmark from Artificial Analysis (April 2026). That 38-percentage-point improvement means brands with structured, accurate content now get more reliable Gemini representations than they did six weeks ago. Brands with sparse or inconsistent coverage still face a 50% hallucination risk under the improved model, meaning Gemini may still invent details about them.
Most GEO programs optimize for one platform. Model Council runs all three at once.
We build multi-platform citation strategies that put your brand in ChatGPT's training data, Claude's citation pool, and Gemini's index simultaneously. Start with a citation gap audit across all three.
Book a Discovery Call

Who uses Model Council
Model Council is available only to Perplexity Max subscribers, priced at $200/month or $2,000/year. The feature runs on web only with no mobile support as of launch.
That price point filters for a specific user profile. Perplexity's own documentation for Model Council cites $50,000-plus business decisions, M&A due diligence, and complex strategic analysis as the primary contexts where the feature adds value. These are not casual queries. The Max subscriber base skews toward professional researchers, analysts, and decision-makers running high-stakes information gathering.
That profile is the ICP for most B2B SaaS companies selling into mid-market and enterprise accounts. The person at a 500-person company running a Council query on "best CRM for B2B sales teams" is building a shortlist. The Council output (which brands appear as high-confidence convergence signals versus flagged gaps) will shape which vendors they investigate further and which they deprioritize before ever visiting a website.
Perplexity reached $305 million ARR as of April 2026 and processes 100 million queries per day. The Max tier is a fraction of that user base, but it contains the users most likely to be making large purchasing decisions with structured research processes. For B2B brands, Perplexity Max user density matters more than raw user count.
The Deep Research connection
Perplexity's Deep Research feature, which runs long agentic research sessions before producing a detailed answer, was upgraded to run on Claude Opus 4.6 for Max users in April 2026. Previously it ran on Opus 4.5.
On the AA-Omniscience benchmark, Claude Opus 4.6 shows a 36% hallucination rate on citation-sensitive tasks, compared to 86% for GPT-5.5, according to Karo Zieminski's April 2026 analysis. For research-heavy sessions that produce detailed brand comparisons or category analyses, the model underlying Perplexity Deep Research now produces materially more accurate brand representations.
The GEO implication from this upgrade: brands with structured, factual content get better Deep Research representations now than they did a month ago. Brands with inconsistent or sparse training data coverage still face a 36% hallucination risk, meaning Perplexity Deep Research may still invent details about them. The improvement narrows the gap between well-covered and poorly-covered brands, but does not eliminate it.
As the Profound query behavior study published April 30, 2026 shows, Perplexity runs nearly identical sub-queries across repeated runs of the same prompt, with only 14% uniqueness across runs. A brand that achieves a citation in a standard Perplexity query will see that citation persist across many runs of the same query. That stability extends to Deep Research sessions: a brand well-cited in Perplexity's standard citation pool will appear consistently in Deep Research sessions covering the same category.
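As a rough illustration of what a sub-query uniqueness figure like Profound's 14% could mean, the sketch below computes one plausible version of the metric: the share of distinct sub-queries that appear in only one run across repeated runs of the same prompt. This is our interpretation for illustration, not Profound's published methodology, and the sample sub-queries are invented.

```python
from collections import Counter

def subquery_uniqueness(runs: list[set[str]]) -> float:
    """Fraction of distinct sub-queries that appear in exactly one run.

    `runs` holds the set of sub-queries the platform issued on each
    repeated execution of the same prompt. A low value means nearly
    the same sub-queries are re-issued every time.
    """
    counts = Counter(q for run in runs for q in run)
    if not counts:
        return 0.0
    unique = sum(1 for c in counts.values() if c == 1)
    return unique / len(counts)

# Hypothetical example: three runs that mostly repeat the same sub-queries.
runs = [
    {"best crm enterprise", "crm pricing comparison", "crm g2 reviews"},
    {"best crm enterprise", "crm pricing comparison", "crm reddit"},
    {"best crm enterprise", "crm pricing comparison", "crm g2 reviews"},
]
print(f"uniqueness: {subquery_uniqueness(runs):.0%}")  # prints 25% here
```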
The convergence signal as a quality marker
The chair model's output tells users which facts have multi-model support. When all three models agree that a brand is a relevant option for a given category, the convergence signal tells the user that answer has independent validation from three systems trained by three different companies on three different corpora.
This is a quality signal B2B buyers have not had before. A standard Perplexity answer represents one platform's citation pool. A Council answer represents the intersection of three.
For brands with strong multi-platform presence, this is a competitive advantage. For brands present on only one platform, the Council may surface them as "noted by only one model," which signals fragility rather than authority. The buyer sees that signal explicitly. A brand cited by all three models appears as a settled answer. A brand cited by one appears as an open question.
The 3.2x higher AI mention rate for brands with 10+ independent citing domains is consistent with this: multi-source presence correlates with AI mention frequency across all platforms. Model Council makes that multi-source logic explicit to the user rather than keeping it implicit in a single model's output.
What the optimization strategy looks like
The three-pool problem has a three-pronged answer. Each pool has a different primary entry path.
For ChatGPT/GPT-5.4 training data presence: G2 reviews, Wikipedia entries, third-party editorial mentions, and Reddit threads in relevant communities are the main inputs. The Apollo.io AEO case study shows the scale effect from a well-executed Reddit strategy: 63% brand citation rate on awareness prompts, with a single competitor comparison thread generating 3,000 additional citations in one week.
For Claude's citation pool: long-form LinkedIn articles targeting specific B2B queries, coverage in premium journalism, and structured reference content that reads as authoritative. Claude favors evergreen authority content over freshness signals. Perplexity-optimized content also helps here, since Perplexity's citation pool overlaps partially with Anthropic's training preferences for B2B topics.
For Gemini's index: structured content with JSON-LD schema, sequential heading hierarchies, and FAQ schema that raises AI citation rates by 350%. Google's AI Overviews and AI Mode optimization work transfers directly to Gemini's contribution in Model Council answers, since Gemini 3.1 Pro draws from the same underlying Google citation pool.
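As a concrete illustration of the structured-markup point above, here is a minimal sketch that builds one schema.org FAQPage JSON-LD block in Python. The question, answer, and tool name are placeholders you would replace with your own content; the FAQPage, Question, and acceptedAnswer structure is standard schema.org markup.

```python
import json

# Hypothetical FAQ content; swap in your own questions and answers.
faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is the best [category] tool for enterprise teams?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "ExampleTool supports enterprise teams with ...",
            },
        }
    ],
}

# Embed the serialized output in a <script type="application/ld+json">
# tag on the page so crawlers can read it as structured data.
print(json.dumps(faq_jsonld, indent=2))
```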
The brands that will appear consistently in Council convergence signals are those that have built presence in all three pools through the tactics specific to each. There is no shortcut that crosses platform lines. A Wikipedia entry that helps ChatGPT does not automatically build Gemini presence. A Gemini-optimized structured page does not automatically enter Anthropic's training corpus.
Want to know where your brand stands across all three Model Council citation pools?
We audit your current presence in ChatGPT training data, Claude's citation pool, and Gemini's index separately, then build the platform-specific content and off-site signals to close each gap.
Book a Discovery Call

FAQ
What is Perplexity Model Council?
Perplexity Model Council is a feature for Perplexity Max subscribers ($200/month). It runs GPT-5.4, Claude Opus 4.6, and Gemini 3.1 Pro simultaneously on every query. A chair model synthesizes all three outputs, flagging where they converge, where they disagree, and what each contributed uniquely. The result is a multi-model verified answer rather than a single-platform response. Model Council launched February 5, 2026 and is available on web only.
Does being in Perplexity's standard citation pool guarantee appearing in Model Council?
No. Perplexity's standard citation pool feeds Perplexity's own layer of the Council answer. The GPT-5.4 and Gemini 3.1 Pro components each draw from their own separate pools. A brand well-cited in Perplexity's standard responses will appear in the Perplexity portion of the Council analysis, but may be absent from the ChatGPT and Gemini portions. Appearing as a high-confidence convergence signal requires presence in all three pools independently.
Who are Perplexity Max subscribers and why do they matter for B2B brands?
Perplexity Max costs $200/month or $2,000/year. Perplexity's own documentation describes its primary use cases as high-stakes decisions exceeding $50,000, M&A due diligence, and complex strategic analysis. These are professional researchers, analysts, and enterprise decision-makers running structured research processes. For B2B SaaS companies selling into mid-market and enterprise accounts, this user profile has high overlap with the buyers most likely to influence or make large purchasing decisions.
How does Model Council relate to Perplexity Deep Research?
Deep Research and Model Council are separate features that can work together for Max subscribers. Deep Research runs long agentic research sessions before producing a detailed answer; it was upgraded to Claude Opus 4.6 in April 2026, a model with a 36% hallucination rate on citation-sensitive tasks compared to GPT-5.5's 86%. Both features draw from Perplexity's standard citation pool, but Deep Research produces longer outputs where citation pool presence has more visible impact per query.
How do I know if my brand appears in a Model Council convergence signal?
There is no direct audit tool for Model Council citation presence. The proxy is testing your brand visibility separately on ChatGPT, Perplexity, and Gemini using category-level vendor evaluation queries. A brand that appears consistently across all three independent platforms in separate testing will likely appear as a convergence signal in Council queries. Brands that appear on one platform but not others will either appear as divergence flags or be absent from the Council output.
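There is no public Model Council API, so the proxy test described above has to be run per platform. The sketch below is one way to do that: send the same vendor-evaluation prompt to each provider's API and check whether the brand is mentioned. The model identifiers are placeholders (substitute current model names), the prompt and brand are examples, and raw API answers can differ from what consumer ChatGPT, Gemini, or Perplexity surfaces, so treat the result as a rough signal.

```python
import os

import anthropic
import google.generativeai as genai
from openai import OpenAI

PROMPT = "What's the best [category] tool for enterprise teams in 2026?"
BRAND = "ExampleBrand"  # the brand whose visibility you are auditing

def ask_openai(prompt: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use the current flagship model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_anthropic(prompt: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

def ask_gemini(prompt: str) -> str:
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-pro")  # placeholder
    return model.generate_content(prompt).text

# Simple presence check: does each platform's answer mention the brand?
for name, ask in [("OpenAI", ask_openai), ("Anthropic", ask_anthropic),
                  ("Gemini", ask_gemini)]:
    answer = ask(PROMPT)
    mentioned = BRAND.lower() in answer.lower()
    print(f"{name}: {'mentioned' if mentioned else 'absent'}")
```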
The buyer running all three models on your category
There is one practical framing that makes the Model Council issue concrete.
The buyer sitting at a Perplexity Max account, typing "best [your category] for enterprise teams," is not getting one AI opinion. They are getting three independent AI opinions synthesized into a convergence analysis. The brands that appear as high-confidence answers are those present in three separate citation systems trained by three different companies.
Only 7.4% of Fortune 500 companies currently implement AI search optimization, according to Searchfy.ai's April 2026 study. For B2B SaaS companies below Fortune 500 scale, the percentage is lower still. Most brands have optimized for Google. Some have started on ChatGPT. Very few have built presence across all three of the citation pools that now feed Model Council's convergence analysis.
Brands on four or more AI platforms are 2.8x more likely to appear in ChatGPT responses than brands visible on only one platform. The multi-platform presence effect is not new. What Model Council does is make that effect explicit to users: they now see the citation gap directly, not just the absence of a brand from a single answer.
The brands that close that gap now will show up as convergence signals in the research sessions of the most research-intensive buyers on any AI platform. That is a durable advantage in the categories where B2B deals are won by whoever shapes the shortlist before the first demo request.
Continue the brief
ChatGPT Workspace Agents Are Running Research on Your Category Right Now. Your Brand May Not Be in the Output.
On April 22, OpenAI launched ChatGPT Workspace Agents for Business and Enterprise plans. These are scheduled, long-running AI research agents that run automatically and feed outputs into Slack, Salesforce, Google Drive, and Notion. They draw from ChatGPT's citation pool every time they run. Brands absent from that pool are excluded from the output of every enterprise research workflow that uses one: automatically, repeatedly, with no human ever noticing the gap.
Google Embedded AI Overviews in Gmail and Drive on April 22. Every Major GEO Publication Missed It.
Gmail AI Overviews launched April 22, 2026. Google Drive AI Overviews went generally available in late April. Both use the same Gemini citation pool that feeds AI Mode, AI Overviews, and the Gemini app. Neither launch was covered by any major GEO or AI search publication. For B2B brands, this means enterprise buyers are now getting Gemini-synthesized vendor research from inside their email client and document storage.