AI Marketing Tools Aren't the Problem. The Brief Is.

Marketing teams keep evaluating AI tools and getting disappointed. The tool is not the problem. The brief is. Here is what a real brief actually contains.

Most marketing teams have now run the same experiment. They subscribe to an AI tool — sometimes three. They use it for a month. They produce some campaign concepts, some social copy, some draft emails. The work is faster than it used to be. The work is also worse than the team's best human-only work, and only marginally better than the team's worst human-only work. Within six months, the tool either gets quietly canceled or gets relegated to the kind of low-stakes tasks no one was excited about doing in the first place.

The team's conclusion is almost always the same: the tool was not as good as the marketing claimed.

The conclusion is wrong. The tool was usually fine. The brief was the problem. And until marketing teams start treating the brief as the variable instead of the tool, the next AI subscription will fail in the same way the last one did.

The Tool Stack Delusion

There is a particular kind of magical thinking that has settled over the marketing industry in the last two years. It goes: the right AI tool, configured correctly, plugged into the right workflow, will produce the kind of work that makes a brand distinctive.

Every step of that sentence is wrong, but the most wrong step is the first one. There is no right tool. There is no configuration. There is no workflow that, on its own, produces distinctive work. Distinctive work is produced by a brain — human, machine, or hybrid — operating from a clear position with a sharp brief. Take the position away, take the brief away, and the most expensive tool stack in the world will produce the same average output as the cheapest one.

The tool stack delusion is appealing because it lets a marketing team avoid the harder question. The harder question is: do we actually know what we are asking for, and do we know it precisely enough that someone — or something — could deliver it without having to guess?

Most teams do not know. Most briefs are written for a colleague who already knows the brand, who has sat in the room for the strategy conversations, who can fill in the unspoken context. AI cannot fill in the unspoken context. AI takes the brief at its word. If the brief is vague, the output is vague. If the brief assumes a worldview the AI does not share, the output is generic.

This is not the AI's fault. It is the brief's fault. And the brief is something the team owns.

What a Real Brief Contains

A brief good enough to get distinctive work out of an AI contains four things that most marketing briefs do not.

A position, not a topic. "Write a LinkedIn post about our product launch" is a topic. "Write a LinkedIn post that argues our product launch matters because the category has been getting a specific thing wrong for a decade, and we are the only company with a credible answer" is a position. Topics produce summaries. Positions produce arguments. Brands are built by arguments.

A refusal list. Most briefs say what they want. The best briefs also say what they will not accept. "Do not open with a question. Do not use the word 'unlock.' Do not end with a soft call to action. Do not produce anything that could be confused with a competitor's work." A refusal list is not micro-management. It is the perimeter of the brand's voice, made explicit. AI respects perimeters when they are stated. AI defaults to the average when they are not.

The reader's prior. Almost no brief tells the AI what the reader already believes about the topic before reading. This is the single most expensive omission. A piece of writing that does not know what the reader already thinks cannot land — because it does not know what it has to overcome. "The reader is a marketing director who has heard a hundred AI pitches in the last year and is suspicious of all of them" is a prior. With it, the AI can write copy that earns a second sentence. Without it, the AI writes copy that assumes a fresh audience and dies in the first line.

The verdict the work has to deliver. Most briefs ask for a piece of content. Better briefs ask for a piece of content that lands a specific verdict in the reader's mind. "After reading this, the reader should believe our category has been broken in a way they had not seen before, and that we are the company most likely to fix it." That is a verdict. Once a verdict is named, every sentence in the output can be evaluated against it. Without a verdict, the output is just the AI's best guess at what content on this topic usually looks like.

These four things — position, refusal list, reader's prior, verdict — take longer to write than a typical brief. They also produce work that does not need to be rewritten. The time is spent earlier. The total time is shorter.

Why a Thinking Partner Beats a Tool

A tool executes the brief. A thinking partner pressure-tests the brief before executing it. The difference shows up in the quality of the work, but it starts in the conversation that happens before the work.

When the brief lands on a tool, the tool produces output. If the brief is bad, the output is bad. The tool has no incentive to push back, because pushing back is not what tools do.

When the brief lands on a thinking partner, the brief gets interrogated. The position gets stress-tested. The refusal list gets challenged. The reader's prior gets examined. The verdict gets restated until it is sharp. By the time the work begins, the brief is sharper than it was when it arrived — and the work that comes out of a sharper brief is, mechanically, sharper work.

This is the operational difference between an AI tool and an AI thinking partner. A tool waits for instructions. A thinking partner has a stake in whether the instructions are right.

What to Do Before You Evaluate Another AI Tool

Before subscribing to another platform, before sitting through another vendor demo, before adding another logo to the marketing-tech stack diagram, run one experiment.

Take the most important brief on your team's desk this week. Rewrite it with the four elements above. Hand the rewritten brief to whatever AI you already have access to. Compare the output to what your team produced last time, with the original brief, on the original platform.

If the rewritten brief produces noticeably better work on the same tool, the tool was never the variable. The brief was. And until the brief gets fixed, no new tool will save the work.

The market is full of AI marketing tools. The shortage is in marketers writing briefs sharp enough to deserve the tools they already have.


About the Author

Ben Rotnicki doesn't market products. He diagnoses problems.

Revenue-responsible and customer-obsessed, Ben works at the intersection of growth, loyalty, DTC, and B2C — not as separate disciplines, but as one continuous question: what does this person actually need, and how do we reach them when it matters?

He built Dante Peppermint because thinking tools should think back. An AI-powered stack designed not to generate content, but to pressure-test ideas, surface blind spots, and get closer to something true.

Field Notes is where that process goes public. Not polished takes — working ones. The writing isn't separate from the thinking. It is the thinking.
