How to Train an AI on Your Brand Voice (Without Losing What Makes It Yours)


Most teams say they want to train an AI on their brand voice. What they actually do is upload a style guide and three past campaigns, and then ask the model to write something new. The output sounds vaguely like the brand. It also sounds vaguely like every other brand that uploaded a style guide and three past campaigns.

That is not training. That is reference material. And reference material, on its own, produces an average of what was referenced.

A brand voice is not a tone. It is not a list of words you use and words you don't. It is not the difference between "we" and "you." A brand voice is a position. It is a worldview the brand has decided to take, and the consistent linguistic posture that follows from holding that position over time. Tone falls out of position. Vocabulary falls out of position. Sentence rhythm falls out of position. None of it can be uploaded after the fact, because the position is not a document. It is an architecture.

If you are training an AI on your brand voice and the AI is producing work that sounds like everyone else's, the problem is not that you uploaded the wrong PDF. The problem is that you uploaded a PDF at all.

The Wrong Way: Documents In, Average Out

Here is the dominant pattern in the market right now. A team takes their brand book, their messaging framework, their last six campaigns, and a transcript of the founder's most recent podcast appearance. They paste all of it into the context window or load it into a custom GPT. They prompt: "Write a LinkedIn post about our new product launch in our brand voice."

The model reads everything provided. It then averages across that material — because averaging is what large language models do. The output is technically faithful to the source. It is also technically faithful to every other brand on the platform that did the same thing, because the model's underlying defaults are far stronger than any brand's reference material. The brand fingerprints get smoothed out. What is left is a competent LinkedIn post in a generalized professional register, lightly inflected by the uploaded files.
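The pattern above can be made concrete as a minimal sketch. Everything here is hypothetical: the function name, the sample documents, and the OpenAI-style `role`/`content` message structure are illustrative assumptions, not any particular vendor's API.

```python
# Hypothetical sketch of the "documents in" pattern: every brand asset
# is pasted into a single user turn, so the model treats all of it as
# reference material to average over, not a position to reason from.
def build_context_stuffed_request(brand_docs: list[str], task: str) -> list[dict]:
    """Return a chat-style message list with everything in one user turn."""
    pasted = "\n\n---\n\n".join(brand_docs)
    return [
        {
            "role": "user",
            "content": (
                f"Here is our brand material:\n\n{pasted}\n\n"
                f"Task: {task} Write it in our brand voice."
            ),
        }
    ]

messages = build_context_stuffed_request(
    ["Brand book excerpt...", "Messaging framework...", "Podcast transcript..."],
    "Write a LinkedIn post about our new product launch.",
)
# A single user turn: nothing shapes the model before the task arrives,
# so the model's own defaults do most of the shaping.
```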

This is the moment most teams say "AI just isn't good at brand voice yet." The model is fine. The method is wrong.

What Real Training Actually Requires

Training an AI on a brand voice is a question of what the AI thinks with, not what the AI looks at.

A model that has been given a position will produce work from that position even when the prompt is unrelated to anything in its reference material. A model that has been given reference material will produce work that approximates the reference material when the prompt is close to it, and produce default work when the prompt drifts. The first one has a brand voice. The second one is a search engine with a vocabulary list.

To give a model a position, three things have to happen.

First, the brand has to actually have one. This sounds obvious, and it almost never is true. Most brands have a tone, a palette, and a tagline. They do not have a thesis about what is wrong with the category they compete in, what they refuse to do that competitors do, and what they believe about their customer that the customer would not say about themselves. Without a thesis, there is nothing to train. The work in this stage is editorial, not technical. If you cannot finish the sentence "we are the only brand in our category that believes ___," you do not have a brand voice. You have a logo and a feeling.

Second, the position has to be expressed as a worldview, not a list of rules. "Don't use exclamation points" is a rule. "Restraint is a form of confidence — a brand that needs to shout has not earned the room" is a worldview. A model given the worldview will produce restrained copy in situations the rules never anticipated. A model given the rules will follow them mechanically and produce flat work when the rules don't cover the case.

Third, the worldview has to be installed at the level of architecture, not context. Stuffing the worldview into a prompt is better than nothing. Building it into the system layer of the AI — the part that shapes every response, not just the next one — is the difference between a costume and a character.
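The third step can be sketched the same way. Again, this is an assumption-laden illustration: the worldview text, function name, and role/content message shape are hypothetical, standing in for whatever system layer a given platform exposes. The point is structural: the position lives above the request.

```python
# Hypothetical sketch of the "architecture, not context" pattern: the
# worldview is installed in the system layer, where it shapes every
# response, while the task arrives as an ordinary user turn.
WORLDVIEW = (
    "Restraint is a form of confidence. A brand that needs to shout "
    "has not earned the room. Judge every request against this position, "
    "and push back when the request contradicts it."
)

def build_positioned_request(task: str) -> list[dict]:
    """Return a chat-style message list with the worldview installed first."""
    return [
        {"role": "system", "content": WORLDVIEW},  # shapes every response
        {"role": "user", "content": task},         # just the next request
    ]

messages = build_positioned_request(
    "Write a LinkedIn post about our new product launch."
)
# The worldview persists across turns; the task does not have to
# restate it, and unrelated prompts are still answered from inside it.
```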

The Test That Matters

There is a single test for whether you have actually trained an AI on your brand voice. Ask it to disagree with you.

A model with a position will tell you when your idea contradicts the position. It will push back. It will hold the line on the brand's worldview even when the easier move is to give you what you asked for. A model with reference material will agree with you, because reference material does not generate friction — only positions do.

If you ask your AI to write a campaign in the brand voice and it produces something polished and on-message, that proves nothing. If you ask your AI to argue against your campaign idea on the grounds that it betrays the brand's worldview, and the AI mounts a real argument — then you have trained it. Until then, you have decorated it.
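The friction test itself is just a prompt shape, sketched here with a hypothetical function and campaign idea. The instruction matters more than the wording: it asks the model to refuse the easy move and argue from the position instead.

```python
# Hypothetical sketch of the friction test: instead of asking for
# on-message copy, ask the model to argue against the idea on the
# brand's own grounds. A model holding only reference material has
# nothing to argue from; a model holding a position does.
def build_friction_test(campaign_idea: str) -> str:
    """Return a prompt that asks the AI to disagree, not to comply."""
    return (
        f"Here is our campaign idea: {campaign_idea}\n\n"
        "Do not write the campaign. Instead, argue against it: "
        "where does it contradict the brand's worldview, and why?"
    )

prompt = build_friction_test("A free-shipping flash sale with countdown timers.")
```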

Where This Leads

The most useful thing an AI can do for a brand is not to produce more work faster. It is to produce work the brand can stand behind without having to rewrite. That requires the AI to be operating from inside the brand's point of view, not approximating it from a folder of examples.

Reference material is not training. Style guides are not voices. The difference between an AI that understands your brand and an AI that has read about your brand is the difference between a collaborator and a search bar.

Train for position. Test for friction. Everything else is decoration.


About the Author

Ben Rotnicki doesn't market products. He diagnoses problems.

Revenue-responsible and customer-obsessed, Ben works at the intersection of growth, loyalty, DTC, and B2C — not as separate disciplines, but as one continuous question: what does this person actually need, and how do we reach them when it matters?

He built Dante Peppermint because thinking tools should think back: an AI-powered stack designed not to generate content, but to pressure-test ideas, surface blind spots, and get closer to something true.

Field Notes is where that process goes public. Not polished takes — working ones. The writing isn't separate from the thinking. It is the thinking.
