A growing number of marketers, founders, and writers are noticing the same pattern. They sit down with an AI tool. They write a careful prompt. The model produces something competent, fluent, and oddly familiar, because they have already read the same answer, in the same shape, written by some other team's AI, on some other brand's blog, that same week.
The instinct is to blame the prompt. It is almost never the prompt. The instinct is to blame the model. It is almost never the model. The pattern is not a flaw. It is the architecture of how these systems work, doing exactly what it was designed to do.
Large language models are trained on enormous bodies of human writing and optimized to predict the most statistically likely next token. The most likely next token is, by construction, the one closest to the average of everything the model has read. Averages are not interesting. Averages do not have a position. Averages cannot disagree with you, because to disagree is to occupy a specific point in opinion-space, and an average occupies no point: it occupies the gravitational center of all points.
When you ask a generic AI a question, you are not getting the model's opinion. You are getting the most popular wrong answer, smoothed.
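To make the mechanism concrete, here is a toy sketch of the selection step in Python. The probability table and candidate tokens are invented for illustration, and real systems usually sample rather than pick greedily, but the distribution they sample from is still centered on the consensus continuation.

```python
# Toy illustration: the "model" here is a hand-written next-token
# distribution, not a real LLM; the tokens and numbers are made up.
next_token_probs = {
    "leverage": 0.31,     # the corporate-blog consensus
    "unlock": 0.27,
    "streamline": 0.22,
    "refuse": 0.04,       # the interesting, contrarian continuation
    "gamble": 0.02,       # remaining probability mass spread over other tokens
}

def most_likely_token(probs: dict[str, float]) -> str:
    """Return the highest-probability token: the consensus choice."""
    return max(probs, key=probs.get)

print(most_likely_token(next_token_probs))  # -> "leverage", every time, for everyone
```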
What "Generic" Actually Means
People say AI sounds generic and mean different things by it. Three of those things are worth separating, because the fix for each is different.
Generic-as-bland. The output uses a flat, professional register. Sentences hover around the same length. Every paragraph opens with a topic sentence and closes with a soft summary. This is the model defaulting to the writing style most heavily represented in its training corpus — corporate communications, journalism, and the LinkedIn middle band. The fix is partial: better prompting, better examples, and a clear point of view in the system layer can move the needle. But the gravitational pull back to the middle is strong.
Generic-as-correct-but-unowned. The output is technically right. It contains the things you would expect on the topic. It does not say anything you could not have predicted before reading it. This is the model averaging across all the takes it has seen on the subject and producing the consensus position. The fix is harder: you have to give the model a reason to take a side, and a worldview consistent enough to know which side it would take.
Generic-as-suspiciously-familiar. The output reads as if you have seen it before, because you have — three other brands posted something nearly identical last week. This is the model converging with every other instance of itself being prompted on the same topic at the same time. The fix is structural: you cannot win this game with the same defaults everyone else is using. Either you give the AI a position no one else has installed, or your output joins the convergence.
The first kind of generic is a tone problem. The second is a thinking problem. The third is a positioning problem. Most teams treat all three as tone problems, which is why most teams' AI output continues to sound generic.
Why Averaging Is a Feature, Not a Bug
It is tempting to read the above as a critique of large language models. It is not. Averaging is what makes these models work. It is the reason a single model can answer questions across cooking, contract law, and astrophysics with reasonable competence. The default of "produce the most likely answer" is what gives the technology its breadth.
The problem is that breadth and brand voice are opposites.
A brand voice is a refusal of breadth. It is the deliberate narrowing of possibility space, the decision that out of all the things this brand could say, it will only say the things that come from this position. A brand without that narrowing is not a brand. It is a category. And a model defaulting to its training average will always produce category-level work, not brand-level work, because the training average is the category.
The model is not failing when it gives you the same answer it gave everyone else. The model is succeeding at exactly the task it was built for. The problem is that the task it was built for is not the task you actually want done.
What It Takes to Get a Non-Average Answer
A model produces non-average output when something in the system has been deliberately tilted away from the default. There are three places where that tilt can happen, in increasing order of effectiveness.
The weakest place is the prompt. A clever prompt can pull a model toward a more specific register for a single response. The next response, with a slightly different prompt, will drift back toward the default. Prompt-level tilt does not survive from one exchange to the next.
The middle place is the example set. Showing a model examples of the kind of work you want, in-context, can shift the output meaningfully. This is also where most "training" attempts live, and it is why most of them feel like wallpaper, a pattern pasted over the surface: the moment the prompt drifts away from anything in the example set, the default returns.
The strongest place is the system layer. When the model has been given a worldview that operates underneath every prompt — a position from which to think, not just a set of words to imitate — the output stops drifting toward the average. Not because the model is no longer averaging. Because the average is now being computed across a different, narrower, more specific set.
The model is still doing math. You have just changed which numbers it is doing math on.
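As a sketch of where each layer sits, here is what the three tilts look like as a chat-style message list. The format assumes an OpenAI-style messages array, and the brand, position, and prompts are placeholders invented for the example.

```python
# The three places the tilt can happen, expressed as one message list.
# Format assumes an OpenAI-style chat payload; adapt to your provider.

# Strongest tilt: the system layer. A worldview that sits under every prompt.
system_layer = {
    "role": "system",
    "content": (
        "You write as Acme Dev Tools (a placeholder brand). Position: "
        "developer experience beats feature count. Take sides, name the "
        "trade-off you are accepting, and never close with 'it depends.'"
    ),
}

# Middle tilt: the example set. In-context samples of work in the target voice.
example_set = [
    {"role": "user", "content": "Write an intro about CI pipelines."},
    {"role": "assistant", "content": "Most CI advice optimizes for the wrong bottleneck. ..."},
]

# Weakest tilt: the prompt. It shapes one response; the default creeps back on the next.
prompt = {"role": "user", "content": "Draft a post on monorepos versus polyrepos."}

messages = [system_layer, *example_set, prompt]
# `messages` would then be passed to whichever chat completion endpoint you use.
```

The ordering mirrors the section above: the system message persists across every exchange, the examples help only while the prompt stays near them, and the final user message is the only part most teams ever change.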
The Test
There is a single test for whether the AI you are using has been pulled out of the average. Ask it to take a position on something contested in your category. Then ask it to defend the position against the strongest argument on the other side.
A model still operating from the average will produce a balanced overview. It will present "both sides." It will conclude with something gentle about how the right answer depends on context.
A model with a real position will tell you which side it is on, and why, and where it thinks the opposition is wrong. It will not hedge. It will not "however." If pressed, it will hold the line.
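For concreteness, here is the test written out as a two-turn exchange. The topic is a placeholder; substitute whatever is genuinely contested in your own category.

```python
# The test, as two prompts sent in sequence. The topic is a placeholder.
turn_one = (
    "Take a position: should an early-stage startup start with a monorepo "
    "or split into services from day one? Pick one. Do not present both sides."
)
turn_two = (
    "Now defend that position against the strongest argument for the other side. "
    "If you want to change your answer, say so explicitly and say why."
)

# A model operating from the average answers turn one with a balanced overview
# and folds on turn two. A model with an installed position answers turn one
# in a sentence and holds the line on turn two.
```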
Most AI cannot do this, because most AI was not built to. Generic answers are not a sign that AI is not ready. They are a sign that the AI you are using is doing what it was built to do — and that what you actually need is an AI built to do something else.