For PR and communications teams
Turn your press into AI citations.
Your placements cost real money every month. You can't tell whether they're actually moving the needle in AI answers about your brand. We close that loop, week by week.
§01 Why AI search is reshaping PR
The press placement is no longer the end of the work.
ChatGPT now serves more than 800 million weekly users, and a large share of brand research happens inside answer engines before any direct site visit. When a journalist, analyst, customer, or recruiter wants to know what a brand stands for, they increasingly ask an LLM first.
For comms teams, that changes the unit of work. A Forbes or WSJ hit used to be the end of a campaign. Now it is an input. The question is whether the placement entered the source pool that AI cites when a prompt about your brand runs. Often it did not. The piece ran, the clipping went in the report, and the AI answer did not move.
The work is engineering placements for AI extraction, attributing source-pool lift to specific pieces, and managing the narrative across every surface AI reads. PR teams have always cared about share of voice. The new metric is share of cited answer.
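To make that metric concrete, here is a minimal sketch of how share of cited answer can be computed over a prompt set. Everything in it, from the data shape to the brand and domain names, is illustrative rather than a description of our production tooling.

```python
# Illustrative sketch: "share of cited answer" as a computable number.
# A brand "wins" a prompt on a surface when the answer cites at least
# one source the brand owns or placed. All names here are hypothetical.

def share_of_cited_answer(answers, brand_domains):
    """answers: dicts like {"prompt": ..., "surface": ..., "cited_domains": [...]};
    brand_domains: domains the brand owns or earned through placements."""
    if not answers:
        return 0.0
    wins = sum(
        1 for a in answers
        if any(d in brand_domains for d in a["cited_domains"])
    )
    return wins / len(answers)

answers = [
    {"prompt": "What does Acme do?", "surface": "chatgpt",
     "cited_domains": ["forbes.com", "acme.com"]},
    {"prompt": "Best Acme alternatives", "surface": "perplexity",
     "cited_domains": ["g2.com"]},
]
print(share_of_cited_answer(answers, {"acme.com", "forbes.com"}))  # 0.5
```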
§02 What we do for PR and comms teams
Six lines of work, run weekly, owned by us.
We sit alongside your comms function. Press strategy, agency relationships, and spokesperson selection stay with you. The AI visibility layer is our remit, run as a delivered outcome, not a tooling subscription.
01
Brand sentiment audit across the five AI surfaces
We run a fixed set of brand and category prompts against ChatGPT, Claude, Gemini, Perplexity, and AI Overviews. The output is a documented read of what AI says about your brand today, where the sentiment came from, and which cited sources are doing the work.
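For readers who want to see the shape of that loop, a minimal sketch follows: it queries one surface through the OpenAI API and pulls cited domains out of the answer text with a crude regex. The model name and prompts are assumptions, and surfaces that return citations in structured form, such as Perplexity's API, would be parsed differently.

```python
# Minimal audit loop: run a fixed prompt set against one surface and
# record which domains the answer cites. Model name, prompts, and the
# regex extraction are illustrative assumptions, not our pipeline.
import re
from urllib.parse import urlparse
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment
PROMPTS = [
    "What is Acme known for?",
    "Is Acme a credible vendor in its category?",
]

def cited_domains(text):
    # Crude: pull every URL out of the answer text, keep the hostname.
    return {urlparse(u).netloc for u in re.findall(r"https?://\S+", text)}

for prompt in PROMPTS:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    answer = resp.choices[0].message.content or ""
    print(prompt, "->", cited_domains(answer) or "no cited URLs")
```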
02
Citation attribution from press placements
When a Forbes, WSJ, or trade-press placement runs, the comms team needs to know whether it entered the AI source pool. We trace citation lift on the relevant prompts inside seven days of the placement, and we report which placements moved AI and which did not.
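The attribution arithmetic itself is simple. As a sketch, assuming daily prompt runs logged with their cited domains (the data shape and dates below are made up):

```python
# Attribution sketch: citation share on the relevant prompts in the
# seven days before a placement versus the seven days after.
from datetime import date, timedelta

def citation_share(runs, domain):
    hits = [r for r in runs if domain in r["cited_domains"]]
    return len(hits) / len(runs) if runs else 0.0

def placement_lift(runs, domain, placed_on, window_days=7):
    window = timedelta(days=window_days)
    before = [r for r in runs if placed_on - window <= r["day"] < placed_on]
    after = [r for r in runs if placed_on <= r["day"] < placed_on + window]
    return citation_share(after, domain) - citation_share(before, domain)

runs = [
    {"day": date(2025, 3, 1), "cited_domains": {"g2.com"}},
    {"day": date(2025, 3, 9), "cited_domains": {"forbes.com", "acme.com"}},
]
print(placement_lift(runs, "forbes.com", placed_on=date(2025, 3, 5)))  # 1.0
```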
03
Target publication selection by source-pool position
Not all earned media is equal in AI. A small set of publications dominates the cited source pool for each category. We tell you which outlets AI actually reads from for your topics, so the comms calendar is built around placements that compound visibility instead of vanity hits.
04
Narrative consistency across surfaces
AI answers drift when your owned site, Wikipedia, LinkedIn, executive bios, and trade-press coverage describe the brand differently. We reconcile the narrative across surfaces so the model has one story to retrieve, then we monitor for drift weekly.
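One way a drift check like that can be operationalized, as a sketch: embed the one-line brand description from each surface and flag pairs that diverge. The embedding model and the 0.85 similarity threshold are assumptions for illustration, not our production pipeline.

```python
# Drift-check sketch: embed the brand description from each surface
# and flag pairs that diverge. Model and threshold are assumptions.
from itertools import combinations
from math import sqrt
from openai import OpenAI

client = OpenAI()
descriptions = {
    "owned_site": "Acme builds payroll software for maritime logistics firms.",
    "wikipedia": "Acme is a maritime logistics company.",
    "linkedin": "Acme: payroll for the maritime industry.",
}

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

surfaces = list(descriptions)
vectors = dict(zip(surfaces, embed([descriptions[s] for s in surfaces])))
for s1, s2 in combinations(surfaces, 2):
    sim = cosine(vectors[s1], vectors[s2])
    if sim < 0.85:
        print(f"possible drift: {s1} vs {s2} (similarity {sim:.2f})")
```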
05
AI-citation-grade press strategy
Most press releases are not written for AI extraction. We work with the comms team on release structures, quote formats, and supporting data assets that get cited at a higher rate. The pitch list and the writing change. The placement results compound.
06
Weekly sentiment and citation monitoring
Sentiment in AI answers moves. A crisis hits, a competitor places a hostile story, an analyst note lands. We monitor the brand prompt set every week and flag movement before it becomes the consensus answer. The comms team gets a response window measured in days, not quarters.
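The weekly movement check can be as simple as a score diff against the prior run. The sketch below assumes per-prompt sentiment scores on a -1 to 1 scale and a 0.2 alert threshold, both illustrative:

```python
# Weekly movement check: diff this week's per-prompt sentiment scores
# against last week's and flag anything past a threshold. The -1..1
# scale and the 0.2 threshold are illustrative assumptions.

def flag_movement(last_week, this_week, threshold=0.2):
    alerts = []
    for prompt, score in this_week.items():
        delta = score - last_week.get(prompt, score)
        if abs(delta) >= threshold:
            alerts.append((prompt, round(delta, 2)))
    return alerts

last = {"Is Acme trustworthy?": 0.4, "Acme reviews": 0.10}
now = {"Is Acme trustworthy?": 0.1, "Acme reviews": 0.15}
print(flag_movement(last, now))  # [('Is Acme trustworthy?', -0.3)]
```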
§03 The outcomes we commit to
Named results, written into the engagement letter.
We deliver results, not dashboards, and the pilot pricing is built around that. You pay €500 per month for tools and APIs plus your direct media spend. We carry the team. At the end of the 90-day pilot, if we hit the goal we agreed on day one, the engagement converts on a €6,000 success fee and a €2,500 per month retainer thereafter. If we miss, you walk. No further obligation.
Citation share on brand and category prompts
A fixed set of prompts covering your brand, your spokespeople, and your category. The success metric is documented citation share lift across the five AI surfaces.
Sentiment lift on brand-defining prompts
Where the brand is being framed negatively or inaccurately in AI answers, we commit to a target sentiment shift on the specific prompts that decide reputation.
Source-pool inclusion for target publications
We commit to entering the cited source pool with placements in the publications AI actually reads for your topics, not the ones with the prettiest logos.
Wikipedia and entity-graph accuracy
Where Wikipedia and Wikidata feed into your AI answer, we commit to a documented accuracy state across the entity surfaces models retrieve from.
§04 Who this is for
Comms leaders carrying budget without an AI visibility read.
The typical engagement is a Head of Communications, VP Comms, or Chief Brand Officer at a brand running real earned-media budget through one or more agencies. The press placements are landing. The traditional clipping report still gets sent. The board is starting to ask what AI says about the brand, and the answer is somewhere between vague and absent.
You usually come to us because of one of three triggers. The CEO ran a prompt about the brand and did not like the result. A competitor is showing up in AI answers and you cannot tell which of their placements caused it. Or the comms agency is asking what AI visibility work belongs in their scope and what belongs to a separate operating discipline.
§05 How we work
One framework, applied weekly. The methodology is public.
The work runs on the CITE framework. We comprehend the prompt set, influence the source pool, track citation and sentiment movement on a weekly cadence, and evolve the program as platforms shift. The research underneath is published openly.
§06 FAQ
The questions PR buyers ask before they engage.
How do AI systems decide which press to cite?
Why doesn't my Forbes mention show up in ChatGPT?
Can negative sentiment in AI answers be fixed?
Do Wikipedia and Wikidata still matter?
How do I attribute AI visibility back to specific PR placements?
Ready to become the answer AI gives?
Book a 30-minute discovery call. We'll show you what AI says about your brand today. No pitch. Just data.