Technical GEO · 8 min read

Alt Text Helps Google. It Does Nothing for AI Citations.

Cite Solutions

Research · April 21, 2026

Key takeaways for AI citation readiness

Make every important page easier for answer engines to quote, trust, and reuse.

1. Lead each section with a direct answer block before expanding into detail.
2. Put evidence close to the claim so AI systems can extract support cleanly.
3. Use schema and strong information architecture to improve eligibility, not as a gimmick.

On April 21, 2026, Otterly.ai published a controlled experiment with a simple premise: can facts embedded only in image filenames, alt text, or captions get detected and cited by AI search engines?

The answer was no. Across six page variations tested on five AI search platforms, image metadata alone produced zero citations. Facts that existed only in alt attributes, filenames, or caption text were invisible to ChatGPT, Perplexity, Google AI Overviews, and the other platforms Otterly tested. The platforms "read right past" the image metadata when extracting citable content.

This finding belongs to a growing body of controlled GEO research showing that several traditional SEO signals have no effect on AI citation behavior. If you write alt text to help Google index your images, you are doing the right thing for traditional search. For AI citations, it does not count.

Otterly controlled experiment series

Three experiments. The same conclusion each time.

Otterly tested each SEO signal type across 5–6 AI search platforms (ChatGPT, Perplexity, Google AI Overviews, and others).

1. Raw Markdown pages: 0% citation rate. Zero AI search visits during the test period.
2. Hidden text: ignored by 4 of 6 platforms. Not treated as citable content even when crawled.
3. Image alt text, filenames, and captions: undetected across all platforms. No citations when the claim appeared only in metadata.

What gets cited — Google vs AI search engines

| Signal | Google SEO | AI citation |
| --- | --- | --- |
| Visible body text | Yes | Yes |
| H1/H2/H3 headings | Yes | Yes |
| Image alt text | Yes | No |
| Image filename | Yes | No |
| Figure captions only | Partial | No |
| Hidden / display:none text | No | No |
| Raw Markdown (unrendered) | No | No |

The rule: Every fact you want AI to cite must appear in visible body text. Metadata, attributes, and image-associated signals are not in scope for AI citation extraction.

Sources: Otterly.ai GEO experiment series (2026) — image alt text experiment published April 21, 2026

What Otterly actually tested

The methodology isolated one variable: whether facts could be detected by AI search engines when those facts appeared only in image-associated metadata rather than in visible body text.

Otterly built six page variations and tested them across five platforms. In the test conditions, key facts, statistics, and claims appeared exclusively in image filenames, alt attributes, or captions without any corresponding mention in the page's body paragraphs. The control was a version with the same facts in visible text.

The result was unambiguous. AI platforms extracted and cited content from the visible body text versions. The metadata-only versions produced no citations. The conclusion Otterly drew: "image metadata alone is not enough" for AI search visibility.

This is a controlled result, not a correlation. Otterly isolated the variable, ran it across multiple platforms, and got a consistent outcome. The finding is not "alt text might matter less" but "alt text, filenames, and captions-without-body-text have no detected contribution to AI citation."

Why AI citation engines skip image metadata

The underlying reason makes technical sense once you understand how AI citation works.

AI search platforms retrieve web content by sending crawlers to fetch page HTML, then processing that content through an extraction layer that identifies passages suitable for citation. The extraction layer is built around textual content, not image attributes. Alt text is an accessibility attribute processed by screen readers and used by Google for image indexing. It is not a factual content signal for language model extraction.

When GPTBot, ClaudeBot, or PerplexityBot crawls a page, the retrieval process tokenizes the visible text content. An <img alt="Conversion rate improved 42% after implementing answer blocks"> attribute does not produce the same tokenized factual content as a paragraph that reads: "After implementing structured answer blocks, conversion rate improved 42%." The first is metadata attached to a visual element. The second is a factual statement in the document's text structure.

Google Images is specifically trained to use alt text for image relevance ranking. That has no equivalent in how language models process pages for citation. The two systems are measuring different things.

This distinction also explains why image captions fail when they exist only as figure labels. A <figcaption> element is readable by AI crawlers, but a caption that says "Figure 1: Conversion rate comparison" without also having the relevant facts stated in the surrounding prose is not providing a complete, citable factual claim. AI extraction is looking for self-contained passages. A caption that references a chart without explaining the finding is incomplete as a citation target.
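To make the gap concrete, here is a minimal sketch of a text-first extraction pass, built on Python's standard-library `html.parser`. This is a toy model, not any platform's actual pipeline, and the class name and sample page are hypothetical; but it shows why an attribute value never reaches the extracted text:

```python
# Sketch: how a text-first extraction pass "sees" a page.
# Real AI crawlers are far more sophisticated, but the first step,
# collecting text nodes, already excludes attribute metadata.
from html.parser import HTMLParser

class VisibleTextExtractor(HTMLParser):
    """Collects text nodes; ignores tag attributes such as alt/title."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

page = """
<p>Our results are shown below.</p>
<img src="conversion-chart.png"
     alt="Conversion rate improved 42% after implementing answer blocks">
"""

extractor = VisibleTextExtractor()
extractor.feed(page)
visible = " ".join(extractor.chunks)

print(visible)            # only the <p> text survives
print("42%" in visible)   # False: the statistic lived only in alt text
```

A production extraction layer adds rendering, chunking, and ranking on top of this, but the outcome for the alt attribute is the same: it is never part of the token stream that passages are extracted from.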

The pattern across Otterly's three experiments

The alt text finding is the third in a series of Otterly controlled experiments testing whether specific SEO signals translate to AI citation behavior. Each one produced a consistent negative result.

The first experiment tested raw Markdown. Pages served as unrendered .md files with Markdown formatting syntax produced zero AI search visits during the test period. Otterly's Markdown finding is now well-established in GEO research: AI crawlers receiving formatting characters instead of semantic HTML have nothing to extract.

The second experiment tested hidden text, content placed in display:none elements or visually hidden positions. Four of the six AI platforms tested ignored hidden text content entirely, producing no citations from facts that existed only in hidden elements.

The third experiment, published today, tested image metadata. All five platforms tested in the current experiment produced no citations from alt-text-only facts.

The pattern across the three experiments is consistent: signals that work for traditional SEO but exist outside visible body text fail to produce AI citations. Traditional search engines crawl and index these signals because their architecture is built for broad document retrieval. AI citation engines extract passages because their architecture is built for specific factual synthesis.

What this means for your content

The practical implication is direct: every claim you want AI to cite must appear in visible body text.

This is a stricter requirement than most content teams are currently applying. Here are a few common patterns where it creates invisible content problems.

Product comparison pages often place key differentiators in image overlays or in screenshots that get alt text like "Feature X available in Pro plan, not Starter." The actual capability claim is in the image metadata. If the body text of the page does not also state "Feature X is available in Pro and Enterprise plans but not in Starter," that claim is invisible to AI citation.

Case study pages frequently show outcome metrics in charts or graphs with alt text describing the result. If the body text of the case study says "we saw significant improvement in our key metrics" but the actual numbers are only in the chart's alt attribute, AI systems cannot cite the specific result. The number needs to appear in a prose sentence.

Landing pages with hero images sometimes put benefit claims in alt text because those claims are visually represented. "97% of users see results in 30 days" in an image's alt attribute is not in scope for AI extraction. The same claim in a paragraph beneath the image is.

Technical documentation that uses figures and diagrams often relies on figure captions to explain what the diagram shows. If the diagram explanation exists only as a caption and is not restated in the surrounding prose, AI systems extracting content for citation will not find a complete factual claim.
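As a sketch of the fix for the case study pattern, with hypothetical markup: keep the alt text for accessibility and Google Images, and restate the finding as a self-contained prose sentence.

```html
<!-- Before: the statistic exists only as image metadata -->
<figure>
  <img src="results-chart.png" alt="Conversion rate improved 42%">
  <figcaption>Figure 1: Conversion rate comparison</figcaption>
</figure>

<!-- After: same markup, plus the claim stated in visible body text -->
<figure>
  <img src="results-chart.png" alt="Conversion rate improved 42%">
  <figcaption>Figure 1: Conversion rate comparison</figcaption>
</figure>
<p>After the redesign, conversion rate improved 42% quarter over quarter.</p>
```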

Not sure which of your key claims are invisible to AI citation engines?

We audit your highest-value pages for body text coverage of citable claims, identify where facts exist only in metadata or image attributes, and implement the changes that put your content in front of AI systems that can actually cite it.

Get Your AI Visibility Audit

How to check this in under ten minutes

The fastest audit method is view-source combined with a specific text search.

On any page you want AI to cite, open the page source (not the browser-rendered version). Search for your most important claims, statistics, and differentiators. If those claims appear only in alt="" attributes, <figcaption> text, or image title attributes and do not also appear in <p>, <li>, or heading tags, they are invisible to AI citation.

Look specifically for:

  • alt=" followed by a specific statistic or claim that does not appear in body text
  • <figcaption> elements where the key finding from a chart is described only in the caption
  • title="" attributes on images containing product claims

The fix in each case is the same: add the claim to the visible text. Not a vague reference to "as shown in the chart above" but the actual fact, stated clearly, in a sentence that could stand alone as a citation.
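The manual view-source check above can also be sketched as a script. This is an illustrative heuristic, not a complete auditor: it assumes "claims" can be approximated by numbers appearing in alt text, and the function and sample page are hypothetical.

```python
# Heuristic audit sketch: flag alt texts containing numbers whose
# statement never appears in the page's visible body text.
import re
from html.parser import HTMLParser

class AltAudit(HTMLParser):
    """Collects img alt attributes and visible text nodes separately."""
    def __init__(self):
        super().__init__()
        self.alts = []
        self.text = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            alt = dict(attrs).get("alt", "")
            if alt:
                self.alts.append(alt)

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

def alt_only_claims(html):
    """Return alt texts that contain a number (a claim-like signal)
    where that number is missing from the visible body text."""
    audit = AltAudit()
    audit.feed(html)
    body = " ".join(audit.text)
    flagged = []
    for alt in audit.alts:
        numbers = re.findall(r"\d+%?", alt)
        if numbers and not all(n in body for n in numbers):
            flagged.append(alt)
    return flagged

page = """
<p>We improved conversions significantly.</p>
<img src="chart.png" alt="Conversion rate improved 42% in 30 days">
"""
print(alt_only_claims(page))   # the 42% claim exists only in metadata
```

Running this against a page where the same statistic also appears in a body paragraph returns an empty list, which is the state every high-priority page should be in.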

For a complete technical audit of what AI crawlers can and cannot extract from your pages, the GEO crawlability audit guide covers the broader set of issues including JavaScript rendering gaps and robots.txt configurations. The alt text issue is one entry on that list, but it tends to be missed specifically because it looks like a content optimization problem rather than a technical access problem.

How this connects to the body text principle

The Otterly findings, across all three experiments, point to the same underlying principle, that passages beat pages in AI citation: AI systems extract self-contained text passages from visible content. The passage needs to be complete on its own.

An alt text claim is not a passage. A hidden text claim is not a passage. An unrendered Markdown document is not accessible as a passage. In each case, the information either does not exist in the form AI extraction requires or exists in a location AI extraction does not reach.

This principle also connects to the broader finding from Conductor's 2026 AEO/GEO Benchmarks Report that 44.2% of AI citations come from content in the first 30% of a page. AI extraction prioritizes visible, early-positioned body text. Content in metadata or late-stage visual elements is outside both the "visible" and "early" criteria.

The Otterly experimental series is building a specific kind of knowledge the GEO field has been missing: controlled tests of individual variables rather than correlational studies of citation patterns. The three experiments to date have collectively mapped which content types are and are not in scope for AI citation extraction. For anyone building a technical GEO strategy, that map is more useful than general advice about "optimizing for AI."

FAQ

Does image alt text affect AI citations?

Based on Otterly's April 2026 controlled experiment, no. Facts that appeared only in image alt text, filenames, or captions, without any corresponding mention in visible body text, produced zero citations across five AI search platforms tested. Alt text continues to matter for traditional Google SEO and image search, as well as for accessibility. Its absence from AI citation behavior is specific to how language models extract and synthesize factual content from pages.

Why doesn't AI search use image alt text like Google does?

Google Images is built to match visual content to user queries, and alt text is part of that matching system. AI citation engines are built to extract factual text passages from pages and synthesize answers. These are different architectures with different input requirements. An alt attribute is metadata attached to an image element. A prose sentence with the same claim is a fact in the document's text structure. Language models processing pages for citation extract from the text structure. Image attributes are outside that scope.

What content signals do AI citation engines actually use?

From Otterly's controlled experiments and corroborating research, the consistent pattern is that AI citation engines extract from visible body text, rendered as HTML. Specific signals that improve citation rate include: facts stated clearly in body paragraphs, FAQ schema markup (which Otterly found produces a 350% citation increase), content positioned in the first 30% of a page, and lists or numbered structures rather than dense prose. Signals that do not produce citations include raw Markdown, hidden text, and image metadata. The FAQ schema and crawlability research covers the positive signals in detail.

What is the fastest way to find alt-text-only claims on my site?

View the page source of your highest-priority pages and search for alt=". Review each instance where the alt text contains a specific fact, statistic, or product claim. Then search the same page source for that specific text in <p> or <li> tags. If the claim exists only in the alt attribute and not in body text, it is invisible to AI citation. The fix is to add a sentence in the body text that states the same fact directly.

Does this apply to image metadata beyond alt text?

Yes. Otterly's experiment covered alt text, image filenames, and figure captions that exist only as labels rather than as factual statements in prose. The same principle applies to title attributes on images and to text that appears only in CSS-generated content, data attributes, or aria labels. If the content is not part of the rendered visible text of the page, AI citation extraction cannot reach it.

The requirement is precise

Alt text has a job. It helps Google understand images and it makes pages accessible to screen reader users. Those are real and important functions. AI citation is just not one of them.

The Otterly experiment series has now produced three controlled results showing the same thing: facts that exist outside visible body text do not get cited by AI search engines. Markdown formatting, hidden elements, and image metadata are each invisible in their own way.

The requirement is specific and actionable. Every claim you want AI to cite needs to appear in a visible prose sentence or list item in the rendered HTML of your page. If it exists only as an attribute, a caption reference, or a hidden element, it is not in scope.

Your best claims may be invisible to AI search right now.

We run a full content and technical audit, identify where key claims exist only in image metadata or non-body text, and build the citation program that puts your evidence where AI systems can actually find it.

Book a Discovery Call

Ready to become the answer AI gives?

Book a 30-minute discovery call. We'll show you what AI says about your brand today. No pitch. Just data.