Google searches used to open up a world of sources. You searched, sifted through links, and came to your own conclusions.
Today, AI Overviews, ChatGPT, Perplexity, and other AI platforms compress multiple sources into a single, synthesized response. In the process, nuance is flattened, and certain viewpoints can be overrepresented.
This marks a fundamental shift in online reputation management (ORM). Search engines no longer just surface information; they shape it. The result is a rise in zero-click behavior, where users accept AI-generated answers without visiting the underlying sources.
For brands, that changes the stakes. Visibility no longer guarantees influence. Even a No. 1 ranking can be bypassed if the narrative tells a different story.
AI narrative formation: How AI systems deliver answers
AI search engines now follow a new pattern for delivering answers. For the sake of this article, we’ll call it AI narrative formation. Here’s how it works.
Source pooling
AI systems pull from a wide range of sources. While you might expect trusted, peer-reviewed content, they often draw from Reddit, YouTube, review platforms, complaint forums, and social media sites like Instagram and TikTok.
Signal weighting
Not all sources carry equal weight. A single trusted source can be outweighed by a large volume of lower-quality content. For example, a highly active Reddit thread filled with negative reviews may outperform a fact-checked source like Wikipedia.
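The dynamic above can be illustrated with a toy calculation. This is a hypothetical sketch, not any platform's actual algorithm: it models each source as a sentiment score, an authority weight, and a mention count, and shows how a high volume of low-authority posts can drag the aggregate below a single high-authority positive source.

```python
# Illustrative sketch only — not a real AI system's weighting scheme.
# Each source is (sentiment in [-1, 1], authority weight, mention count).

def aggregate_sentiment(sources):
    """Weighted average of sentiment across sources; 0.0 when empty."""
    total = sum(sent * weight * count for sent, weight, count in sources)
    volume = sum(weight * count for _, weight, count in sources)
    return total / volume if volume else 0.0

sources = [
    (+1.0, 0.9, 1),   # one positive, fact-checked source (high authority)
    (-0.8, 0.2, 40),  # an active thread of negative posts (low authority, high volume)
]

print(round(aggregate_sentiment(sources), 2))  # -0.62 — volume outweighs authority
```

Even with a weight less than a quarter of the trusted source's, the 40 negative posts dominate the average, which mirrors how an active complaint thread can outrank a fact-checked page in an AI summary.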
Narrative compression
AI condenses dozens of inputs into a short, digestible summary. In the process, nuance is lost, and fringe cases can become dominant themes. A complex reputation may be reduced to: “Users say this company is not trustworthy.”
Continued reinforcement
These summaries don’t stay contained. They’re screenshotted, shared, and repeated across platforms. Those repetitions become new inputs, reinforcing the same narrative in future AI outputs.
Dig deeper: The authority era: How AI is reshaping what ranks in search
How a finance company’s solid reputation unraveled in AI search
To see how AI narrative formation works in action, let's look at a case study.
My company recently worked with a finance organization to repair its online reputation. For this example, we’ll call it Company X.
Problems emerged for Company X with the rise of Google AI Overview. Previously, under traditional SERPs, Company X had a solid reputation. Users searching Google for reviews would find a 4.2 rating on Trustpilot, a strong company website with employee bios, and numerous positive blog reviews from trusted sources.
Google AI Overview changed that. How? By resurfacing an old Reddit thread centered on complaints about Company X.
When users asked Google, “What are opinions like about Company X?” AI Overview delivered a clear answer: “Company X has mixed reviews, with specific complaints regarding customer service.” But those customer service issues were resolved nearly a decade ago.
AI Overview pulled multiple reviews from that Reddit thread, combined them with strong negative phrasing, and factored in the lack of structured positive content to form a semi-negative impression. A new perception of Company X was created.
Why AI search amplifies reputational risk
We can dig deeper into how AI impacts reputational risk. Consider the following:
- How negative AI narratives spread: In traditional search, users had to dig for negative results. With LLMs, those results can surface instantly, even when they’re defamatory or incorrect.
- Hallucinations and misinformation: Most users are now aware that AI can hallucinate, but hallucinations aren't always easy to spot. Making matters worse, LLMs present incorrect claims and factual inconsistencies with confidence.
- The snowball effect: As discussed in narrative reinforcement, AI-generated answers get screenshotted, shared, and repeated across platforms. That repetition builds momentum, creating challenges ORM firms now have to manage.
A hard truth has emerged in ORM: The most accurate claim doesn’t rise to the top. The most repeated claim does.
Dig deeper: Generative AI and defamation: What the new reputation threats look like
A step-by-step guide to auditing AI-generated narrative formation
Let’s walk through another case to see how an AI-generated narrative can be audited.
CEO X is the founder of a SaaS company. He has an ongoing thought leadership presence and a strong reputation in his industry.
On a recent podcast appearance, one quote was taken out of context and aggregated across several platforms. The quote was framed as an opinion rather than a fact. Blog posts were written, and Instagram Live reactions spread online.
In no time, ChatGPT and Google AI Overview turned CEO X into a controversial figure.
Here’s a step-by-step guide to approaching that reputation management crisis.
Step 1: Mapping queries
We begin by identifying what AI systems are saying about CEO X. We ask ChatGPT and Google AI Overview questions such as “What did CEO X say?” and “What is CEO X’s current reputation?” This reveals the narrative each platform has formed.
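Query mapping is mostly a matter of systematically covering the combinations of platforms and question phrasings. A minimal sketch, with illustrative templates and platform names (adapt both to the brand or person being audited):

```python
# Hypothetical helper for Step 1: generate the full grid of reputation
# queries to run against each AI platform, manually or via API.

TEMPLATES = [
    "What did {name} say?",
    "What is {name}'s current reputation?",
    "Is {name} trustworthy?",
    "What are common complaints about {name}?",
]

PLATFORMS = ("ChatGPT", "Google AI Overview", "Perplexity")

def map_queries(name, platforms=PLATFORMS, templates=TEMPLATES):
    """Return one (platform, query) pair per platform/template combination."""
    return [(p, t.format(name=name)) for p in platforms for t in templates]

for platform, query in map_queries("CEO X"):
    print(f"{platform}: {query}")
```

Logging each platform's answer next to its query makes Step 2 (capturing outputs) a matter of filling in a column, and keeps audits repeatable over time.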
Step 2: Capturing outputs
We identify the claims associated with CEO X. Google AI Overview and ChatGPT describe CEO X as a controversial figure who recently made comments in poor taste. The narrative formed across both platforms is trending negative.
Step 3: Reviewing the sources
Next, we analyze the sources AI Overviews and ChatGPT rely on, checking whether they’re outdated, repetitive, or low quality. (In CEO X’s case, the latter two apply.)
Step 4: Analyzing the narrative gap
We identify the gap between AI’s narrative and reality.
- What are CEO X’s actual views?
- What was the context of the quote?
- And what has his reputation been up to this point?
Step 5: Correcting and replacing sources
The final step is to replace or respond to those negative sources. Claims can be addressed directly on Reddit, Instagram, or other platforms spreading the narrative. Structured explanations should also be published through FAQs and policy pages, and third-party validation should be strengthened.
Dig deeper: How AI changes how we respond to negative reviews and comments
A new mindset: Reputation is now an output
Focusing solely on SEO rankings is no longer enough. We need to think in terms of narrative shifts and framing. That also means thinking in terms of inputs and outputs.
Users aren’t evaluating individual pages. They’re engaging with AI-generated answers. Rather than managing what users find, we need to manage the answers AI systems deliver. That means strengthening what those systems rely on:
- Publishing high-quality first-party content.
- Earning credible third-party mentions.
- Reinforcing positive customer reviews.
- Addressing misinformation directly.
- Improving structured data.
- Maintaining accurate Wikipedia or Wikidata entries where applicable.
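On the structured data point, one common approach is publishing a schema.org Organization block as JSON-LD, which gives AI systems a machine-readable, first-party statement of who the company is. A minimal sketch — every value below is a placeholder to be replaced with the brand's real name, domain, and profile URLs:

```python
import json

# Sketch of a schema.org Organization JSON-LD block. All values are
# placeholders; substitute the brand's actual details.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Company X",                # placeholder brand name
    "url": "https://www.example.com",   # placeholder domain
    "description": "Placeholder description of the organization.",
    "sameAs": [                         # official profiles AI systems can cross-reference
        "https://www.linkedin.com/company/example",
        "https://en.wikipedia.org/wiki/Company_X",
    ],
}

# Wrap for embedding in a page's <head>.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(org, indent=2)
    + "\n</script>"
)
print(snippet)
```

The `sameAs` links are the key piece for reputation work: they connect the brand's site to its authoritative profiles, giving answer engines consistent, first-party signals to draw on.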
Contributing authors are invited to create content for Search Engine Land and are chosen for their expertise and contribution to the search community. Our contributors work under the oversight of the editorial staff and contributions are checked for quality and relevance to our readers. Search Engine Land is owned by Semrush. Contributor was not asked to make any direct or indirect mentions of Semrush. The opinions they express are their own.


