The short answer is no.
On May 7, 2026, Google stopped displaying FAQ rich results in Search. The visual SERP enhancement is gone. By June 2026, the Search Console FAQ report disappears. By August 2026, the Search Console API drops FAQ support entirely. That timeline is real, and the announcement is in Google's own structured data documentation.
What Google did not do is deprecate the FAQPage schema itself. The markup still parses, still validates, and is still used to understand pages. Search Engine Journal confirmed this distinction in its May 10 coverage: "Web developers and SEO professionals need not rush to remove FAQ schema."
So the question your dev team is probably asking right now, the one being raised in every SEO Slack channel since the announcement, is the wrong question.
The real question is what FAQPage schema does for you in 2026, now that Google is no longer the only retrieval surface that reads it.
Citation rate by content format

The headline numbers, from Otterly's analysis of 1 million+ AI citations across ChatGPT, Perplexity, and Google AI Overviews (2026):

- FAQ schema produces the largest single-format citation boost, the highest single-format effect measured in the study
- 74.2% of cited content uses list or numbered structure
- 73% of websites in the study had crawlability issues that prevented AI systems from reading their content; schema optimization has zero impact until this is fixed
- Community content earns 52.5% of AI citations; brand-owned content earns 47.5%

One failure mode behind the crawlability number: AI crawlers often receive formatting characters instead of semantic HTML, leaving pages readable to humans but forcing the AI to infer structure from prose.
What actually changed on May 7
A small but important distinction.
The FAQ rich result was a visual SERP enhancement. It expanded a question-and-answer block directly inside Google Search, taking real estate that would otherwise have gone to the blue links below it. Sites that earned the enhancement got a click-magnet feature. Sites that did not got pushed down the page.
Since August 2023, that enhancement was already restricted to a small whitelist of government and health sites. For most of the web, FAQ rich results had not appeared in years. May 7 simply ended eligibility for the few remaining sites that still had access.
The FAQPage JSON-LD schema is a separate thing. It is a structured representation of question-and-answer content embedded in page source. Google still parses it. Bing still parses it. Every major LLM that grounds answers against the web still parses it. None of that has changed.
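The distinction is easier to see with the markup in front of you. Here is a minimal sketch of a FAQPage JSON-LD block as it sits in page source, built in Python purely for illustration; the question and answer text are placeholders, not prescribed wording:

```python
import json

# A minimal FAQPage JSON-LD block, as embedded in page source inside
# a <script type="application/ld+json"> tag. Text is placeholder only.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Is FAQPage schema deprecated?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "No. Google retired the FAQ rich result display, "
                        "but the FAQPage JSON-LD type itself still parses "
                        "and is still read by search and AI platforms.",
            },
        }
    ],
}

# Serialize into the script tag that would ship in the page <head> or <body>.
script_tag = (
    '<script type="application/ld+json">\n'
    + json.dumps(faq_schema, indent=2)
    + "\n</script>"
)
print(script_tag)
```

The rich result read this block to build a SERP widget. Everything else that parses pages, from Googlebot to LLM grounding pipelines, still reads the same block.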
What is being retired:
- The FAQ rich result display in Google Search (May 7, 2026)
- The Search Console FAQ rich result report (June 2026)
- The Rich Results Test FAQ support (June 2026)
- The Search Console API FAQ rich result data (August 2026)
What is not being retired:
- The FAQPage JSON-LD schema type itself
- Google's general parsing of FAQ structured data
- AI platform extraction of FAQ-marked content
- The validity of the markup in HTML
The SERP enhancement is dead. The semantic signal is not.
Why removing FAQPage schema is the worst tactical move right now
Removing schema after this kind of announcement feels logical. The rich result is gone. The Search Console report is going. The API support is going. If the schema was earning the rich result and the rich result is dead, the schema looks like dead code.
That logic is wrong because it assumes Google Search is the only retrieval surface that reads your structured data.
In 2026, it is not. AI search platforms have become primary citation surfaces for B2B buyers, and they treat FAQPage JSON-LD as one of the cleanest extraction signals available.
What Otterly found
Otterly analyzed over a million AI citations across ChatGPT, Perplexity, and Google AI Overviews. Pages with FAQ schema markup were cited 2,379 times, versus a baseline of 529 citations for unmarked content. That is a 350% citation lift. The full numbers are in our FAQ schema and AI citations analysis.
That lift exists because of how LLMs decode pages. The schema gives the model a pre-parsed question-and-answer pair. The model does not have to guess what is a question, what is an answer, or what is supplementary prose. The structure is explicit, and the extraction is cheap. When a user asks something and your FAQ entry already contains the answer in clean JSON-LD, the model has every incentive to pull it.
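That extraction step can be sketched in a few lines. This is an illustration of the mechanic, not any platform's actual pipeline; the page source and helper name are hypothetical:

```python
import json
import re

# Hypothetical page source containing a FAQPage JSON-LD block.
html = """
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "FAQPage",
 "mainEntity": [{"@type": "Question",
                 "name": "What does the markup change do?",
                 "acceptedAnswer": {"@type": "Answer",
                                    "text": "It removes the rich result, not the schema."}}]}
</script>
"""

def extract_faq_pairs(page_source: str) -> list[tuple[str, str]]:
    """Pull (question, answer) pairs from FAQPage JSON-LD blocks.

    No inference needed: the pairs are already explicit in the markup.
    """
    pairs = []
    pattern = r'<script type="application/ld\+json">(.*?)</script>'
    for block in re.findall(pattern, page_source, re.DOTALL):
        data = json.loads(block)
        if data.get("@type") != "FAQPage":
            continue
        for entity in data.get("mainEntity", []):
            question = entity.get("name", "")
            answer = entity.get("acceptedAnswer", {}).get("text", "")
            if question and answer:
                pairs.append((question, answer))
    return pairs

print(extract_faq_pairs(html))
```

Compare that to inferring the same pairs from headings and paragraphs: the structured path is a dictionary lookup, the unstructured path is a judgment call.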
What the major platforms do with FAQPage data
Different surfaces, same behavior. Each treats FAQPage JSON-LD as a primary extraction signal:
- ChatGPT with web search: parses FAQPage JSON-LD when grounding answers, often quoting the answer text verbatim
- Claude with web search: uses structured Q&A pairs as preferred snippet sources for follow-up reasoning
- Perplexity: ranks pages with clean FAQPage markup higher in its source ranking pipeline
- Google AI Overviews: uses FAQPage data to assemble multi-source answers even when no rich result is shown
- Google AI Mode: similar behavior, with FAQ entries appearing as discrete extraction units in synthesized answers
This is the part teams miss. Google killed the rich result. Google did not stop reading the schema. AI Overviews and AI Mode still rely on it.
What FAQPage schema actually signals to an LLM
A short detour into how this works under the hood, because the answer to "should I remove this" depends on understanding what the schema is doing for you.
When an AI platform decides what to cite, it does not read your full page. It segments your content into passages and ranks each passage independently against the user query. Our passages beat pages guide covers the mechanic in detail.
FAQPage JSON-LD does three things in that pipeline:
- It pre-segments your content. The model does not have to guess where one answer ends and the next begins. Each Question and acceptedAnswer pair is already a passage.
- It removes ambiguity about intent. A heading like "Why pricing changes mid-quarter" could be the start of a section, a list item, or part of a sidebar. A Question field in FAQPage JSON-LD is unambiguously a query the page intends to answer.
- It increases passage extraction confidence. Models extract more aggressively from sources they parse with high confidence. Structured data raises that confidence. Unstructured prose lowers it.
That is why FAQ schema delivers a 350% citation lift even on pages where the visible content is identical to unmarked competitors. The markup is the difference.
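The passage mechanic can be illustrated with a deliberately crude scorer. Real pipelines use embedding similarity; plain token overlap is enough to show that each passage competes on its own, and that an explicit Q&A pair arrives pre-segmented. All text here is hypothetical:

```python
def overlap_score(query: str, passage: str) -> float:
    """Crude lexical overlap between query and passage.

    Real retrieval uses embedding similarity; token overlap is enough
    to illustrate passage-level (not page-level) ranking.
    """
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / len(q) if q else 0.0

# Two passages from the same hypothetical page. The second is a
# pre-segmented FAQ pair: question plus standalone answer.
passages = [
    "Our platform was founded in 2019 and serves hundreds of teams.",
    "Why does pricing change mid-quarter? Pricing changes mid-quarter "
    "when usage crosses the committed tier, and the delta is prorated.",
]

query = "why does pricing change mid-quarter"
ranked = sorted(passages, key=lambda p: overlap_score(query, p), reverse=True)
print(ranked[0][:40])
```

The page as a whole is mediocre for this query. The FAQ passage alone is a near-perfect match, and passage-level ranking is what gets scored.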
Want to know which of your pages are actually being cited by AI?
Our AI visibility audit maps every prompt your buyers run, every source the model cites, and every gap between your content and the citation pool. Two weeks. Plain English. Real fix-it list.
Book a Discovery Call

How traditional SEO and AI search see FAQPage schema differently
Traditional SEO asks:
- Does this earn a rich result?
- Does this expand the SERP footprint?
- Does this show up in the Rich Results Test?
- Does the Search Console FAQ report show impressions?
AI search asks:
- Does this give the model a clean Q&A pair?
- Does this match how a real buyer would phrase the query?
- Does the answer text stand alone as a passage?
- Is the structure machine-parseable without inference?
The first set of questions is now answered by "no" across the board. The second set is unaffected by the May 7 change.
Most teams will read the deprecation headline, apply the first set of questions, and conclude the schema is dead weight. The teams that win in AI citations are the ones running the second set.
Five reasons to keep FAQPage schema on your site
These are the load-bearing arguments. Each one alone justifies keeping the markup. Together they make removal a clear strategic mistake.
Reason 1: FAQPage schema drives a 350% AI citation lift
The Otterly study is the single largest dataset we have on AI citation behavior, and structured FAQ markup was one of the strongest predictors of inclusion. Removing the schema directly trades that lift for cleanup that nobody outside your dev team will see.
Reason 2: AI Overviews still parse it even without rich results
Google AI Overviews and AI Mode are not the same as the classic blue-link SERP. They synthesize answers from multiple sources, and FAQPage JSON-LD is one of the inputs. The May 7 change affected the visual rich result, not the AI Overviews retrieval pipeline.
Reason 3: ChatGPT Search and Claude rely on machine-parseable Q&A
When a user asks "what is the difference between X and Y" and your page has that exact question in FAQPage > Question, half the work of answering is already done for the model. Strip the schema and the model has to infer from H2 headings and paragraph structure. The inference is slower, less confident, and more likely to skip you.
Reason 4: Perplexity's source ranking favors structured pages
Perplexity ranks sources before generating answers. Pages with explicit semantic structure rank higher than pages where the same content is unmarked prose. FAQ schema is a clean, low-cost way to push your pages up that internal ranking, and the lift compounds over multi-source answers.
Reason 5: The cost of keeping it is zero
The markup is already on the page. It does not block rendering. It does not affect Core Web Vitals. It does not show up to users. Removing it requires a deployment and a regression test, and it introduces regression risk. Keeping it requires nothing.
What to actually do this month
Five concrete actions, in order.
Step 1: Confirm your FAQPage schema is still valid
Run a validator against your highest-traffic pages with FAQ markup. Make sure the JSON-LD parses, the questions match the visible content on the page, and there are no warnings. Mismatches between schema and visible content are an Ahrefs flag and an AI parsing problem at the same time.
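A structural check along these lines can also run in CI, before any validator UI. This is a sketch of the kinds of checks involved, not a reimplementation of Google's or Schema.org's validation rules; the sample strings are hypothetical:

```python
import json

REQUIRED_KEYS = {"@context", "@type", "mainEntity"}

def validate_faq_jsonld(raw: str) -> list[str]:
    """Return a list of problems; empty means structurally valid.

    Structure-only check: it does not cover Google's full guidelines
    or the schema-vs-visible-content match, which needs the page text.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"JSON does not parse: {exc}"]
    problems = []
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    if data.get("@type") != "FAQPage":
        problems.append("@type is not FAQPage")
    for i, entity in enumerate(data.get("mainEntity", [])):
        if entity.get("@type") != "Question":
            problems.append(f"entity {i}: @type is not Question")
        if not entity.get("name"):
            problems.append(f"entity {i}: empty question name")
        if not entity.get("acceptedAnswer", {}).get("text"):
            problems.append(f"entity {i}: empty acceptedAnswer text")
    return problems

good = ('{"@context": "https://schema.org", "@type": "FAQPage", '
        '"mainEntity": [{"@type": "Question", "name": "Q?", '
        '"acceptedAnswer": {"@type": "Answer", "text": "A."}}]}')
bad = '{"@type": "FAQPage", "mainEntity": [{"@type": "Question", "name": ""}]}'
print(validate_faq_jsonld(good))
print(validate_faq_jsonld(bad))
```

Run it against the JSON-LD extracted from your highest-traffic pages and fix anything that returns a non-empty list.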
Step 2: Refactor your FAQ entries to match real buyer queries
Most FAQ sections were written for SEO keyword targeting. AI citation rewards a different style. Each question should read like something a buyer would type into ChatGPT or Perplexity. Each answer should stand alone in 40 to 80 words. Our schema deployment matrix by page type covers which page types benefit most from this rewrite.
Step 3: Add FAQPage schema to pages that do not have it
Especially product, pricing, comparison, and service pages. These are the page types where AI Overviews and ChatGPT pull most often, and most of them have no FAQ markup at all. Adding it is a cheap change with a measurable citation upside.
Step 4: Migrate Search Console FAQ tracking to alternative measurement
The API support ends in August 2026. If you have dashboards pulling FAQ rich result data, replace them now. Move the same tracking into AI citation monitoring. Citation share by page type and citation lift after schema deployment are more useful metrics than rich result impressions ever were.
Step 5: Audit for FAQPage entries that lie
The single fastest way to lose citations is mismatched schema. A FAQPage acceptedAnswer that does not appear visibly on the page used to be a Google penalty. It is now an AI hallucination risk. A model that pulls an answer users cannot verify on the page will get flagged, and the page may stop getting cited.
The audit is simple. For every FAQ entry, confirm the visible page contains the same answer text, or a direct paraphrase of it. Fix or remove the entries that fail.
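The exact-match half of that audit is scriptable; the paraphrase half still needs a human pass. A minimal sketch, with hypothetical page and answer text:

```python
import re

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so markup differences don't matter."""
    return re.sub(r"\s+", " ", text.lower()).strip()

def answer_is_visible(answer_text: str, visible_page_text: str) -> bool:
    """Exact-match check only. Direct paraphrases still need human review."""
    return normalize(answer_text) in normalize(visible_page_text)

# Hypothetical visible page text, as a text-extraction pass would see it.
page_text = """
FAQ

Is the Pro plan billed annually?
Yes. The Pro plan is billed annually, with monthly
billing available at a higher rate.
"""

schema_answer = ("Yes. The Pro plan is billed annually, with monthly "
                 "billing available at a higher rate.")
print(answer_is_visible(schema_answer, page_text))
print(answer_is_visible("The Pro plan is free.", page_text))
```

Entries that fail the exact-match check go into a manual queue: either the visible text is a legitimate paraphrase, or the entry lies and gets fixed or removed.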
Get FAQPage schema deployed across your highest-value pages
The CITE framework deploys answer blocks and structured data engineered for AI citation. Two weeks to a measurable lift in ChatGPT, Claude, Perplexity, and AI Overviews.
See What We Do

What this looks like in practice
A short worked example.
Take a B2B SaaS pricing page. The traditional FAQ section asks: "What does the Pro plan include?" with a bulleted answer of features. That works as content, but as an AI citation it is weak. A buyer would not type that question into ChatGPT.
The AI-optimized version asks: "What is the difference between the Pro and Enterprise plans for a 200-person team?" The answer is a 60-word passage that compares specific limits and names the inflection point. Same content area. Different question, different answer, different citation behavior.
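Here is that refactored entry expressed as JSON-LD, with the answer length checked mechanically against the 40-to-80-word target from Step 2. The plan names, seat limits, and pricing details are hypothetical:

```python
import json

# The refactored pricing-page entry as a FAQPage Question object.
# All product details below are invented for illustration.
entry = {
    "@type": "Question",
    "name": "What is the difference between the Pro and Enterprise "
            "plans for a 200-person team?",
    "acceptedAnswer": {
        "@type": "Answer",
        "text": "Pro caps seats at 150, so a 200-person team needs "
                "Enterprise. Enterprise removes the seat cap, adds "
                "SSO and audit logs, and moves support to a named "
                "account manager. The price inflection point is the "
                "seat cap: below 150 seats Pro is cheaper per seat, "
                "above it Enterprise pricing applies automatically.",
    },
}

# The answer has to stand alone as a passage, so check its length.
word_count = len(entry["acceptedAnswer"]["text"].split())
print(word_count)
print(json.dumps(entry, indent=2)[:60])
```

The question reads like a real buyer prompt, and the answer is a self-contained passage a model can quote without pulling anything else from the page.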
Pages structured this way show up in ChatGPT Pro plan comparisons. Pages structured the old way show up nowhere.
The pattern repeats across every page type. Comparison pages, integration pages, service pages, support pages. The schema is the easy part. Rewriting the questions and answers to match how buyers actually ask is the work that produces the citation lift.
For the broader pattern across page types, see our AEO schema audit guide and how AI platforms choose which sources to cite.
FAQ
Is FAQPage schema deprecated as of May 7, 2026?
No. Google deprecated the FAQ rich result, which was the visual SERP enhancement. The FAQPage JSON-LD schema itself is not deprecated. Google still parses it, and AI platforms including ChatGPT, Claude, Perplexity, Google AI Overviews, and Google AI Mode still use it as a primary extraction signal.
What is the timeline for Search Console changes?
Google stopped showing FAQ rich results in Search on May 7, 2026. The Search Console FAQ rich result report is removed in June 2026, along with Rich Results Test support. Search Console API support for FAQ data ends in August 2026. The schema validation itself continues to work.
Will removing FAQPage schema improve my site speed or SEO?
No measurable improvement. FAQPage JSON-LD is a small block of structured data that does not affect rendering, Core Web Vitals, or organic rankings. Removing it carries deployment risk and gives up the AI citation lift documented in the Otterly study with no offsetting benefit.
Does FAQPage schema still help with Google AI Overviews?
Yes. AI Overviews and AI Mode use FAQPage JSON-LD when assembling synthesized answers from multiple sources. The May 7 change affected the classic rich result display, not the AI Overviews retrieval pipeline.
Should I add FAQPage schema to pages that do not have it?
Yes, particularly on product, pricing, comparison, and service pages. These are the page types AI platforms pull from most often, and most have no FAQ markup. Adding it is a low-cost change with documented citation lift.
What about the eight-month wait until the API is removed?
If your team uses the Search Console API to pull FAQ rich result data, plan the migration before August 2026. Replace the report with AI citation monitoring, which is what the data was actually telling you to optimize for in the first place.
The takeaway
Google killed the visible reward for FAQPage schema. The semantic reward is intact and growing. ChatGPT, Claude, Perplexity, and Google's own AI surfaces all read this markup, and the documented citation lift is one of the largest single-factor effects in the AI search literature.
Removing FAQPage schema in response to the May 7 announcement is the easy reaction. It is also the wrong one. Keep the markup, refactor the questions to match how buyers actually ask, and add the schema to the page types that do not have it. That is the position your AI citation rate will reward over the next twelve months.