For PR and communications teams

Turn your press into AI citations.

Your placements cost real money every month. You can't tell whether they're actually moving the needle in AI answers about your brand. We close that loop, week by week.

§01 Why AI search is reshaping PR

The press placement is no longer the end of the work.

ChatGPT now serves more than 800 million weekly users, and a large share of brand research happens inside answer engines before any direct site visit. When a journalist, analyst, customer, or recruiter wants to know what a brand stands for, they increasingly ask an LLM first.

For comms teams, that changes the unit of work. A Forbes or WSJ hit used to be the end of a campaign. Now it is an input. The question is whether the placement entered the source pool that AI cites when a prompt about your brand runs. Often it did not. The piece ran, the clipping went in the report, and the AI answer did not move.

The work is engineering placements for AI extraction, attributing source-pool lift to specific pieces, and managing the narrative across every surface AI reads. PR teams have always cared about share of voice. The new metric is share of cited answer.

§02 What we do for PR and comms teams

Six lines of work, run weekly, owned by us.

We sit alongside your comms function. Press strategy, agency relationships, and spokesperson selection stay with you. The AI visibility layer is our remit, run as a delivered outcome, not a tooling subscription.

01

Brand sentiment audit across the five AI surfaces

We run a fixed set of brand and category prompts against ChatGPT, Claude, Gemini, Perplexity, and AI Overviews. The output is a documented read of what AI says about your brand today, where the sentiment came from, and which cited sources are doing the work.
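
If you want to see the shape of that audit log, here is a minimal sketch in Python. Everything in it is illustrative: the prompt set, the ACME brand, and the query_surface stub, which stands in for whatever client each surface actually exposes. Only the domain extraction and the CSV log are meant literally.

```python
# Minimal weekly audit-log sketch. query_surface is a hypothetical
# placeholder, not a real client: swap in the API call for each surface.
import csv
import re
from datetime import date
from urllib.parse import urlparse

PROMPTS = [
    "What is {brand} known for?",
    "Who are the leading vendors in {category}?",
]
SURFACES = ["chatgpt", "claude", "gemini", "perplexity", "ai_overviews"]

def query_surface(surface: str, prompt: str) -> str:
    """Hypothetical stub: return the raw answer text from one surface."""
    return "Example answer citing https://www.wsj.com/articles/example."

def cited_domains(answer: str) -> set[str]:
    # Pull every URL out of the answer and reduce it to a bare domain.
    urls = re.findall(r"https?://[^\s)\"']+", answer)
    return {urlparse(u).netloc.removeprefix("www.") for u in urls}

with open(f"audit-{date.today()}.csv", "w", newline="") as f:
    log = csv.writer(f)
    log.writerow(["date", "surface", "prompt", "cited_domains"])
    for surface in SURFACES:
        for template in PROMPTS:
            prompt = template.format(brand="ACME", category="logistics software")
            answer = query_surface(surface, prompt)
            log.writerow([date.today(), surface, prompt,
                          ";".join(sorted(cited_domains(answer)))])
```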

02

Citation attribution from press placements

When a Forbes, WSJ, or trade-press placement runs, the comms team needs to know whether it entered the AI source pool. We trace citation lift on the relevant prompts inside seven days of the placement, and we report which placements moved AI and which did not.

03

Target publication selection by source-pool position

Not all earned media is equal in AI. A small set of publications dominates the cited source pool for each category. We tell you which outlets AI actually reads for your topics, so the comms calendar is built around placements that compound visibility instead of vanity hits.

04

Narrative consistency across surfaces

AI answers drift when your owned site, Wikipedia, LinkedIn, executive bios, and trade-press coverage describe the brand differently. We reconcile the narrative across surfaces so the model has one story to retrieve, then we monitor for drift weekly.
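
A minimal sketch of the drift check itself, using only the Python standard library. The stored answers and the 25 percent threshold are illustrative values, not our production configuration.

```python
# Toy drift check: compare this week's answer to last week's per prompt
# and flag any prompt whose wording moved past a set threshold.
from difflib import SequenceMatcher

DRIFT_THRESHOLD = 0.25  # flag when more than 25% of the answer changed

last_week = {"What is ACME known for?":
             "ACME builds logistics software for mid-market shippers."}
this_week = {"What is ACME known for?":
             "ACME is a logistics firm best known for its recent rebrand."}

for prompt, old_answer in last_week.items():
    new_answer = this_week.get(prompt, "")
    drift = 1 - SequenceMatcher(None, old_answer, new_answer).ratio()
    if drift > DRIFT_THRESHOLD:
        print(f"DRIFT {drift:.0%} on prompt: {prompt!r}")
```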

05

AI-citation-grade press strategy

Most press releases are not written for AI extraction. We work with the comms team on release structures, quote formats, and supporting data assets that get cited at a higher rate. The pitch list and the writing change. The placement results compound.

06

Weekly sentiment and citation monitoring

Sentiment in AI answers moves. A crisis hits, a competitor places a hostile story, an analyst note lands. We monitor the brand prompt set every week and flag movement before it becomes the consensus answer. The comms team gets a response window measured in days, not quarters.

§03 The outcomes we commit to

Named results, written into the engagement letter.

We deliver results, not dashboards, and the pilot pricing is built around that. You pay €500 per month for tools and APIs plus your direct media spend. We carry the team cost. At the end of the 90-day pilot, if we hit the goal we agreed to on day one, the engagement converts on a €6,000 success fee and a €2,500 per month retainer thereafter. If we miss, you walk. No further obligation.

Citation share on brand and category prompts

A fixed set of prompts covering your brand, your spokespeople, and your category. The success metric is documented citation share lift across the five AI surfaces.

Sentiment lift on brand-defining prompts

Where the brand is being framed negatively or inaccurately in AI answers, we commit to a target sentiment shift on the specific prompts that decide reputation.

Source-pool inclusion for target publications

We commit to entering the cited source pool with placements in the publications AI actually reads for your topics, not the ones with the prettiest logos.

Wikipedia and entity-graph accuracy

Where Wikipedia and Wikidata feed into your AI answer, we commit to a documented accuracy state across the entity surfaces models retrieve from.

§04 Who this is for

Comms leaders carrying budget without an AI visibility read.

The typical engagement is a Head of Communications, VP Comms, or Chief Brand Officer at a brand running real earned-media budget through one or more agencies. The press placements are landing. The traditional clipping report still gets sent. The board is starting to ask what AI says about the brand, and the answer is somewhere between vague and absent.

You usually come to us because of one of three triggers. The CEO ran a prompt about the brand and did not like the result. A competitor is showing up in AI answers and you cannot tell which of their placements caused it. Or the comms agency is asking what AI visibility work belongs in their scope and what belongs to a separate operating discipline.

§05 How we work

One framework, applied weekly. The methodology is public.

The work runs on the CITE framework. We comprehend what AI currently answers across the prompt set, influence the source pool, track citation and sentiment movement on a weekly cadence, and evolve the program as platforms shift. The research underneath is published openly.

§06 FAQ

The questions PR buyers ask before they engage.

How do AI systems decide which press to cite?
Models do not cite press uniformly. Each LLM and answer engine pulls from a recurring set of trusted domains for a given topic, and that set is narrower than most comms teams assume. The factors that decide inclusion are domain authority, topic relevance, recency, structural readability (clear headings, quoted attributions, clean passage structure), and the presence of supporting data or named expert sources inside the piece. A Forbes contributor post and a WSJ news desk story are not weighted equally even though both appear on prestigious mastheads. Knowing which outlets AI actually reads for your category is half the battle.

Why does my Forbes mention not show up in ChatGPT?
Three common reasons. First, the piece is a Forbes Council or contributor post on a subdomain that AI weighs lower than the main editorial line. Second, the piece is well written but does not contain the structured passages AI extracts: clean attributions, quoted expertise, supporting statistics. Third, the brand mention is in passing rather than in a position that signals the article is about the brand. Earned media has to be engineered for AI extraction, not just for the placement itself.

Can negative sentiment in AI answers be fixed?
Yes, but the path is slower than fixing a Google ranking. AI answers reflect a weighted average of the source pool, so the fix is to change the composition and recency of that pool. That means new authoritative coverage, refreshed owned-site content, corrected Wikipedia where applicable, and direct outreach to the publications that carry the negative framing. Most sentiment corrections we run take three to six months of consistent source-pool work to land. The pilot model is well suited to it because the goal is specific and measurable.
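
To make the weighted-average point concrete, here is a toy model in Python. No answer engine exposes weights like these; the numbers exist only to show why adding fresh, authoritative coverage moves the blended read of a source pool.

```python
# Toy model only: blend per-source sentiment by authority and recency
# to show how one fresh, authoritative piece shifts the pool average.
def pool_sentiment(sources):
    total_weight = sum(s["authority"] * s["recency"] for s in sources)
    weighted = sum(s["sentiment"] * s["authority"] * s["recency"] for s in sources)
    return weighted / total_weight

pool = [
    {"name": "old negative feature", "sentiment": -0.8, "authority": 0.9, "recency": 0.3},
    {"name": "stale wiki section", "sentiment": -0.2, "authority": 0.7, "recency": 0.4},
]
print(f"before: {pool_sentiment(pool):+.2f}")  # before: -0.49

pool.append({"name": "new trade-press profile", "sentiment": 0.7, "authority": 0.8, "recency": 1.0})
print(f"after:  {pool_sentiment(pool):+.2f}")  # after:  +0.21
```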

Do Wikipedia and Wikidata still matter?
Yes, more than most comms teams assume. Wikipedia is one of the most heavily weighted training and retrieval sources across major LLMs, and Wikidata feeds entity graphs that decide whether a brand has a clean machine-readable identity. Inaccurate or stale Wikipedia content shows up directly inside AI answers, sometimes word for word. Wikipedia is also one of the harder surfaces to touch ethically; we work with experienced editors and follow the platform's notability and conflict-of-interest rules.

How do I attribute AI visibility back to specific PR placements?
Through a controlled prompt set and a citation log. We instrument your category and brand prompts on day one. Every placement is tagged. When a new piece runs, we re-run the prompts inside the same week and document any change in cited sources or sentiment. Over a quarter, the comms team gets a clear read of which placements entered the source pool, which did not, and how each one moved the metrics that matter. The dashboard your team reads is the same one we work from.
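
In code terms, the weekly attribution read reduces to a set diff on that citation log. A minimal sketch, with illustrative domains:

```python
# Did the tagged placement enter the cited source pool this week?
before = {"wsj.com", "wikipedia.org", "techcrunch.com"}                # last week's citations
after = {"wsj.com", "wikipedia.org", "techcrunch.com", "forbes.com"}   # this week's

placement_domain = "forbes.com"  # the tagged placement under review

if placement_domain in after - before:
    print(f"{placement_domain} entered the source pool this week")
else:
    print(f"no citation lift attributable to {placement_domain} yet")
```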

Ready to become the answer AI gives?

Book a 30-minute discovery call. We'll show you what AI says about your brand today. No pitch. Just data.