On April 22, 2026, Google launched two autonomous research agents through the Gemini API: Deep Research and Deep Research Max, both powered by Gemini 3.1 Pro. The announcement was quiet relative to what it actually means. Most coverage treated it as a model capability story. It is, in practice, a buyer behavior story.
The capability that should catch every B2B SaaS marketer's attention: Deep Research Max is the first research agent that can combine open-web searching with a company's private data streams in a single API call, then return a fully cited, chart-embedded report.
That means an enterprise buyer's team can query their internal CRM, deal history, and cost data against the public web for vendor reputation, G2 reviews, analyst coverage, and competitor comparisons, all in one run. The output is a synthesized vendor evaluation report, not a list of search results the buyer has to read through and connect themselves.
If your brand isn't visible in Gemini's current citation pool, you won't appear in that report. You won't even be missing from the shortlist. You won't exist in the evaluation.
How Google Deep Research Max works for vendor evaluation
Enterprise buyers now use AI research agents — not Google search — to build vendor shortlists
Source: Google DeepMind Blog — Deep Research and Deep Research Max, April 22, 2026
The vendor evaluation pipeline inside Deep Research Max:

1. Enterprise buyer asks a vendor question: "What's the best [category] tool for a 500-person SaaS company?"
2. Deep Research Max decomposes it into sub-queries: pricing, integrations, security, reviews, competitor comparisons, analyst coverage.
3. The agent searches the public web and private data simultaneously: G2, LinkedIn, press coverage, Wikipedia, internal CRM notes, deal history.
4. The agent synthesizes a cited, chart-embedded report: a vendor shortlist with brand mentions, evidence citations, and visual comparisons.
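The decomposition step in the pipeline above can be sketched as a simple template fan-out. Everything here is illustrative: the template list and the `decompose` function are hypothetical stand-ins for the agent's internal query planning, not part of the Gemini API.

```python
# Illustrative sketch: expand one buyer question into the sub-queries
# a research agent might fan out to. The categories come from the
# pipeline described above; nothing here calls a real API.

SUB_QUERY_TEMPLATES = [
    "{question} pricing",
    "{question} integrations",
    "{question} security and compliance",
    "{question} G2 reviews",
    "{question} vs competitors",
    "{question} analyst coverage",
]

def decompose(question: str) -> list[str]:
    """Fan one vendor question out into research sub-queries."""
    return [t.format(question=question) for t in SUB_QUERY_TEMPLATES]

queries = decompose("best marketing automation tool for a 500-person SaaS company")
print(len(queries))  # one sub-query per evaluation dimension
```

The point of the sketch: a single buyer prompt becomes six or more retrieval targets, which is why visibility across multiple source types (reviews, comparisons, analyst coverage) matters more than ranking for the head query alone.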
Which visibility signals Deep Research Max pulls from (and which it ignores)
| Signal | Status | Why |
|---|---|---|
| Gemini citations (structured content) | Pulled | Deep Research Max runs on Gemini 3.1 Pro; the same citation preferences apply |
| G2 / Capterra reviews | Pulled | Community review sites are indexed in research synthesis |
| LinkedIn presence | Pulled | #2 cited domain across AI platforms; feeds enterprise research queries |
| Wikipedia page | Pulled | Named source in research synthesis; high trust signal |
| Editorial blog posts | Deprioritized | Gemini moved away from editorial perspective in Feb-Mar 2026 |
| Image alt text / hidden text | Ignored | Otterly confirmed these are invisible to Gemini-powered retrieval |
Deep Research Max: key specs
| Capability | Value | Context |
|---|---|---|
| BrowseComp score (Gemini 3.1 Pro) | 85.9 | vs. ~60 for prior generation |
| Improvement over Gemini 3 Pro | +25 pts | On OpenAI's BrowseComp benchmark |
| Enterprise data integration | MCP + uploads | First research agent to combine web + private data |
| Output format | Full report + charts | Cited sources, embedded visualizations |
What Deep Research Max actually does
The standard Deep Research agent is built for speed and efficiency. It handles moderately complex research queries with lower latency and cost than prior Gemini research modes.
Deep Research Max is built for maximum depth. It uses extended test-time compute to iteratively reason, search, and refine. The model runs multiple search passes, cross-checks sources, and synthesizes contradictions before returning a final output. Benchmarks put it at 85.9 on OpenAI's BrowseComp, more than 25 points higher than Gemini 3 Pro on the same measure.
The features that matter specifically for vendor evaluation workflows:
The agent can combine open-web results with a company's internal data through MCP (Model Context Protocol) integrations. Google confirmed future MCP connections for FactSet and PitchBook, meaning financial and investment research data will soon be queryable alongside the open web in a single session.
The output includes full citations and embedded charts. When an enterprise team runs a vendor comparison, they receive a report they can share internally, with sources they can verify, not a chatbot answer they have to transcribe.
Availability is via Gemini API in public preview now, with Google Cloud rollout following.
Why vendor evaluation is the most important use case
Google's Deep Research agents have real uses in healthcare research and financial analysis, both mentioned in the launch materials. But for B2B SaaS brands, the use case that matters most isn't covered in any press release.
An enterprise procurement team evaluating five marketing technology vendors can now run a single Deep Research Max query combining:
- Their internal CRM data on which vendors they've evaluated before
- G2 reviews and comparison content for each vendor
- LinkedIn coverage and company news
- Analyst reports from firms with public-facing research
- Press mentions and case studies the vendor has published
The agent synthesizes that into a vendor report. The team sees a shortlist, with citations, before they've visited a single vendor website.
Gartner projects that 60% of commercial research queries will be influenced by AI answer engines by Q4 2026, a projection cited at Adobe Summit 2026 alongside IBM's own AI visibility research. That projection was framed around general AI search. Deep Research Max is the mechanism that turns "influenced by AI" into "the shortlist was built by AI." Those are meaningfully different things.
The 42% of enterprise prospects who now use ChatGPT or Perplexity for product research before visiting vendor sites (up from 11% in early 2024, per our research on B2B AI search behavior) have been doing so with general-purpose conversational AI. Deep Research Max is purpose-built for the evaluation step that follows.
Is your brand appearing in Gemini-powered vendor evaluations?
We audit your visibility across Gemini, ChatGPT, and Perplexity with the specific prompts your buyers are using, identify where you're missing from evaluation queries, and build the content that puts you in the shortlist.
Book a Discovery Call

The Gemini citation pool your brand needs to be in
Deep Research Max runs on Gemini 3.1 Pro. That means its citation behavior follows the same patterns Seer Interactive documented in their 82,000-response study from April 13, 2026.
Gemini underwent a structural format change between February and March 2026 that most brands haven't adapted to. The citation rate dropped from 99% to 76%. Response format shifted toward heading-heavy, shorter, table-structured outputs. Editorial content from sites like Forbes and Medium took the hardest hits. Wikipedia, Reddit, and structured reference content held.
One brand in Seer's dataset went from a 96% citation rate to 3.7% in a single week. That drop didn't happen because the brand's content got worse. It happened because Gemini's format preferences changed and the brand's content type stopped matching.
Deep Research Max searches the same sources. If your content didn't survive the February-March format shift on standard Gemini, it won't appear in the research reports your buyers are now running through the API.
The content characteristics that held through Gemini's 2026 format changes:
- Clear H2/H3 heading structure that makes it easy for Gemini to parse discrete answer units
- Content organized in table or comparison format, not flowing prose
- Reference-grade density: specific data, disclosed methodology, and cited evidence rather than editorial analysis
- Source types Gemini actively favors: Wikipedia, Reddit threads, structured comparison pages, G2 reviews
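A minimal way to self-audit a page against those characteristics, using only Python's standard library. The thresholds here are arbitrary illustrations of the idea, not Gemini's actual scoring criteria:

```python
from html.parser import HTMLParser

class StructureAudit(HTMLParser):
    """Count structural signals discussed above: H2/H3 headings and
    table markup. The pass/fail thresholds are illustrative only."""
    def __init__(self):
        super().__init__()
        self.headings = 0
        self.tables = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("h2", "h3"):
            self.headings += 1
        elif tag == "table":
            self.tables += 1

def audit(html: str) -> dict:
    parser = StructureAudit()
    parser.feed(html)
    return {
        "headings": parser.headings,
        "tables": parser.tables,
        # Arbitrary illustrative bar: several headings plus a table.
        "structured": parser.headings >= 3 and parser.tables >= 1,
    }

sample = "<h2>A</h2><h3>B</h3><h3>C</h3><table><tr><td>x</td></tr></table>"
print(audit(sample))
```

A script like this is a crude proxy, but it makes the audit repeatable across a content library instead of relying on eyeballing individual pages.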
How Deep Research Max differs from standard AI search
Standard AI search in ChatGPT, Perplexity, or Gemini works in one session. A user asks a question, the model retrieves sources, and the user gets an answer. The process is fast, shallow by research standards, and optimized for a single-turn interaction.
Deep Research Max runs iterative passes. The model retrieves initial results, identifies gaps, generates follow-up queries to fill those gaps, and repeats until it has enough confidence in the synthesis. The BrowseComp benchmark score of 85.9 reflects this iterative depth: the model is significantly better at complex research tasks than single-pass models precisely because it can check its own work.
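The iterative loop described above (retrieve, find gaps, re-query, repeat) can be sketched as plain control flow. The `search` and `find_gaps` functions below are hypothetical toy stand-ins to show the shape of the loop, not Deep Research internals:

```python
# Sketch of an iterative multi-pass research loop, under the assumption
# that retrieval and gap analysis are pluggable functions. Not a real
# implementation of Deep Research Max.

def iterative_research(question, search, find_gaps, max_passes=4):
    """Retrieve sources, identify gaps, generate follow-up queries,
    and repeat until no gaps remain or the pass budget is spent."""
    sources = []
    queries = [question]
    for _ in range(max_passes):
        for q in queries:
            for s in search(q):
                if s not in sources:   # de-duplicate across passes
                    sources.append(s)
        queries = find_gaps(sources)   # follow-up queries for missing info
        if not queries:
            break
    return sources

# Toy stand-ins to demonstrate the control flow:
fake_index = {
    "best CRM": ["g2.com/crm", "wikipedia.org/CRM"],
    "best CRM pricing": ["vendor.com/pricing"],
}
search = lambda q: fake_index.get(q, [])
find_gaps = lambda sources: (
    [] if "vendor.com/pricing" in sources else ["best CRM pricing"]
)

print(iterative_research("best CRM", search, find_gaps))
```

The contrast with single-pass search is visible in the loop itself: the second pass only exists because the first pass left a gap, which is why multi-pass agents surface sources a one-shot query never retrieves.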
| Dimension | Standard AI search | Deep Research Max |
|---|---|---|
| Research depth | Single-pass retrieval | Iterative multi-pass reasoning |
| Data sources | Public web only | Public web + private enterprise data (MCP) |
| Output format | Conversational answer | Full report with citations + charts |
| Session type | Single-turn | Autonomous multi-step agent |
| Benchmark (BrowseComp) | ~60 (Gemini 3 Pro) | 85.9 (Gemini 3.1 Pro) |
| Primary use case | Quick answers, discovery | Vendor evaluation, competitive analysis |
| Access | Consumer-facing interfaces | Gemini API (public preview); Google Cloud (coming) |
The private data integration is the part that makes this a fundamentally different category from general AI search. When an enterprise team can query the open web and their internal systems simultaneously, the output resembles an analyst report, not a search result. That is the format in which vendor decisions get made at $100K+ deal sizes.
The citation gap this creates for most B2B SaaS brands
There is a documented gap between brands visible in Google's organic results and brands cited by Gemini-powered AI systems.
EMGI Group's study of 150 SaaS companies across 120 keywords found that 44% of SaaS brands ranking in Google's top 10 get zero ChatGPT citations. The correlation between organic traffic and AI citation frequency is only 0.23. Topical authority correlates at 0.76.
That gap exists in standard conversational AI. Deep Research Max widens it, because the research agent isn't just pulling the top organic result for a single query. It is synthesizing across multiple passes, checking multiple source types, and prioritizing reference-grade content over editorial content.
A brand that ranks well in Google's organic results for its target keywords but has thin G2 review presence, limited LinkedIn content, and no structured comparison or reference-grade content has two separate problems now. The SEO problem has always existed. The Gemini citation problem is the new one, and it has a direct path to deal pipeline.
IBM's Alexis Zamkow told 50,000 enterprise marketers at Adobe Summit 2026 that citations are the "holy grail" of AI visibility and that 85% of brand mentions in AI answers come from external domains, not brand-owned content. Deep Research Max runs exactly that kind of synthesis, weighting external domain mentions and reference content over owned blog posts.
What the AI Overviews vs. AI Mode separation means here
One finding from Otterly.ai's April 22 experiment is directly relevant to how brands should think about Deep Research.
Using 25 AI-generated YouTube videos on a zero-subscriber channel, Otterly documented a 50-point share-of-voice differential between Google AI Mode (+53%) and Google AI Overviews (+3%) from identical content. The experiment confirmed that AI Mode and AI Overviews run on separate retrieval pipelines.
Deep Research Max is an API product built on Gemini 3.1 Pro. Its retrieval behavior isn't governed by the same pipeline as AI Overviews. Brands that have optimized for AI Overviews are not automatically visible in Deep Research Max outputs.
This is the same lesson the Otterly data taught about AI Mode: Google's AI surfaces do not share a citation pool. Content that generates AI Overviews visibility may not generate Deep Research visibility. The two optimization strategies overlap in their emphasis on structured, reference-grade content, but the source types and query patterns are different.
For B2B SaaS brands with enterprise buyers, the hierarchy now runs: Deep Research Max and similar research agents sit above AI Mode, which sits above AI Overviews, in terms of deal-stage relevance. The research agent output is what shows up in a vendor evaluation committee. The AI Overviews result is what shows up when someone first searches the category.
Optimizing only for AI Overviews is optimizing for early discovery. Deep Research Max is the evaluation checkpoint, and most brands are not yet present there.
Gemini citation presence is now a deal-pipeline variable
We run your brand through the specific query patterns enterprise buyers use for vendor evaluation in Deep Research Max, identify exactly where you're missing, and build the third-party footprint that puts you in the shortlist.
Get Your AI Visibility Audit

What to actually do about this
Getting into Deep Research Max outputs requires the same foundational work as getting into Gemini's standard citation pool, but the stakes are higher and the source types matter more.
Build structured reference content first. Deep Research Max favors the same content characteristics Gemini has moved toward since February 2026: heading-structured pages, tables and comparison formats, specific data with cited methodology. A 1,500-word comparison page with a clear feature matrix and three cited stats outperforms a 6,000-word "ultimate guide" with flowing prose. The AirOps study of 16,851 queries found that pages in the 500-2,000 word range outperform longer content for AI citation. Deep Research applies this at scale across multiple retrieval passes.
The G2 and LinkedIn presence matters more than most content teams acknowledge. Gemini 3.1 Pro, the model powering Deep Research Max, draws from the same source preferences as standard Gemini. Reddit and Wikipedia held through the February-March citation shift while editorial content fell. G2 reviews, structured LinkedIn content, and community-sourced mentions are the source types that feed research synthesis at a higher weight than owned blog content.
Third-party citations are the primary lever. 85% of brand mentions in AI answers come from external domains, according to IBM's Adobe Summit analysis. Owned content is not the primary signal in a research synthesis. Analyst coverage, press mentions, independent review platform presence, and community discussion threads are what appear in the synthesized report. A brand that publishes well on its own domain but has weak external footprint is functionally invisible in research-grade AI outputs.
Comparison content generates 30x more brand name mentions than informational content, according to Kevin Indig's Growth Memo analysis of 3,981 domains. In a research synthesis context, this matters twice: once because the comparison content itself may be retrieved, and again because the brand name mentions in that content feed the training data that informs how the model characterizes your brand.
Monitor Gemini specifically. Only 11% of domains are cited by both ChatGPT and Perplexity. Platform overlap is minimal, and that applies equally to Gemini-powered research agents. Brands optimized for ChatGPT or Perplexity are not automatically visible in Gemini. Deep Research Max is Gemini's territory. The optimization work is different.
FAQ
What is Google Deep Research Max?
Google Deep Research Max is an autonomous AI research agent powered by Gemini 3.1 Pro, launched April 22, 2026, via the Gemini API in public preview. Unlike standard AI search, it runs iterative multi-pass research, combining public web sources with private enterprise data through MCP integrations, and returns fully cited, chart-embedded research reports. It scored 85.9 on OpenAI's BrowseComp benchmark, more than 25 points higher than the prior Gemini generation. Google plans to add FactSet and PitchBook MCP integrations for financial data access.
How does Deep Research Max affect B2B vendor evaluations?
Enterprise buyers and procurement teams can now point Deep Research Max at a vendor category question and receive a synthesized evaluation report combining external sources (G2, LinkedIn, press coverage, analyst content) with their internal CRM and deal data. The output resembles an analyst report rather than a search result. Brands not present in Gemini's citation pool won't appear in these evaluations. IBM's analysis from Adobe Summit 2026 found that 85% of brand mentions in AI research outputs come from external domains, not the brand's own website.
Is Deep Research Max different from Google AI Overviews?
Yes, they use separate retrieval pipelines. Otterly.ai's April 22 experiment documented a 50-point share-of-voice differential between Google AI Mode (+53%) and Google AI Overviews (+3%) from identical content. Deep Research Max, as an API product, runs on a separate retrieval system from consumer-facing AI Overviews. Optimizing for AI Overviews does not make a brand visible in Deep Research Max outputs. The two surfaces require distinct optimization approaches, though both favor structured reference content over editorial prose.
What content types does Gemini 3.1 Pro prefer to cite?
Seer Interactive's April 2026 study of 82,000 Gemini responses documented that, following the February-March 2026 format shift, Gemini strongly prefers content with clear H2/H3 heading structure, markdown tables and comparison formats, and reference-grade density (specific data, cited methodology). Wikipedia and Reddit held stable through the citation rate drop from 99% to 76%. Editorial content from Forbes and Medium took 12-92 percentage point drops. Since Deep Research Max runs on Gemini 3.1 Pro, the same source preferences apply.
How do I check if my brand appears in Gemini Deep Research outputs?
The most direct method is to run your own vendor evaluation queries through the Gemini API with Deep Research enabled, using the exact prompts your target buyers would use (e.g., "What is the best [your category] tool for a [buyer persona] company in [industry]?"). If your brand doesn't appear in the output, note what sources are cited and which content types dominate. The gap identifies exactly where your external footprint needs to grow. A structured AI visibility audit covering Gemini-specific citation surfaces gives you the baseline before you start optimizing.
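Once you have a report's text, a rough first-pass scan for brand mentions and cited domains takes a few lines of Python. The sample report string and the parsing approach are hypothetical; real Deep Research output would need format-specific handling:

```python
import re
from collections import Counter

def scan_report(report_text: str, brand: str) -> dict:
    """Count brand mentions (case-insensitive) and tally the domains
    of any cited URLs in a research report's text."""
    mentions = len(re.findall(re.escape(brand), report_text, re.IGNORECASE))
    # Capture the bare domain from each http(s) URL, dropping "www."
    domains = Counter(re.findall(r"https?://(?:www\.)?([\w.-]+)", report_text))
    return {"brand_mentions": mentions, "cited_domains": dict(domains)}

# Hypothetical report snippet for illustration:
report = (
    "Acme ranked second overall (source: https://www.g2.com/acme). "
    "See https://linkedin.com/company/acme and https://g2.com/reviews."
)
result = scan_report(report, "Acme")
print(result)
```

Running this across a batch of buyer-persona queries gives you a baseline: which domains Gemini keeps citing in your category, and whether your brand appears at all.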
The narrowing window
Three things happened in April 2026 that narrow the window for B2B SaaS brands without a Gemini visibility strategy.
First, Deep Research Max launched, making AI-powered vendor evaluation reports available at enterprise scale through the Gemini API.
Second, Gartner projected that 60% of commercial research queries will be influenced by AI answer engines by Q4 2026. That's two quarters away. The brands establishing Gemini citation presence now will be in the default shortlist when that threshold hits.
Third, Gemini's own citation pool changed significantly in February-March 2026. A 23-point drop in citation rate, combined with a format shift toward structured reference content, means brands that haven't adapted their content type are already losing Gemini ground, independent of Deep Research.
The combination isn't additive. It's multiplicative. A brand missing from Gemini's standard citation pool, running content that Gemini's post-February format doesn't favor, with thin G2 and LinkedIn presence, is nearly invisible across all three layers: standard Gemini search, Google AI Mode, and now Deep Research Max.
The research agent isn't an experiment to monitor. It is operational, in public preview, available to enterprise API users today. The vendor evaluations are already running.