Keeping your brand voice alive in the age of AI content
The voice problem
Forty percent of knowledge workers received AI-generated content from colleagues in the past month that they then had to spend time fixing. Stanford and BetterUp researchers gave this phenomenon a name: workslop. Content that looks polished but has no thinking behind it.
The quality problem is real. But the harder problem is what happens when workslop goes external. When your blog posts, emails, and marketing copy all sound like they came from the same polite, competent, slightly generic assistant, you don't just lose quality. You lose voice. And voice is the thing that makes someone read your next piece instead of scrolling past it.
AI adoption in content creation is not slowing down. EY's 2025 Work Reimagined Survey found that 88% of employees use AI in their daily workflow, though only 5% use it in ways that go beyond basic search and summarization. That 5% treats AI differently. They iterate. They reject first drafts. They maintain a clear division between what the machine handles and what they own.
This playbook is for the other 83% who want to get there. It walks through a practical process for using AI to create content that sounds like you wrote it, because in all the ways that matter, you did.
The 15 tells
AI-generated text follows patterns. Some are artifacts of how models are trained. Some come from reinforcement learning that optimized for sounding helpful and authoritative. Some are just statistical tendencies: certain words and structures appear in training data so frequently that the model reaches for them first.
These patterns shift as models improve. What follows is a February 2026 snapshot. Pull up the last piece of content you published with AI assistance and check it against this list.
1. The em dash epidemic. Count the em dashes on your page. If there are more than one or two, you probably had help. Models use em dashes the way nervous speakers use "um." It's a connector that lets the model wedge an aside into any sentence without committing to a new one.
2. The vocabulary fingerprint. Every model has favorite words. Researchers call these "aidiolects." Right now, the hot list includes: delve, tapestry, landscape, realm, robust, seamless, elevate, streamline, embark, harness, and treasure trove. If three of those appear in your draft, your draft has a tell.
3. The "not X, but Y" formula. "It's not about working harder. It's about working smarter." This negation-then-correction structure is a model favorite because it creates a rhythm that feels insightful. In reality, it's a pattern that substitutes structure for substance.
4. Compulsive tricolons. Groups of three: "Educate. Empower. Execute." One tricolon in a piece is fine. When every section ends with a three-beat cadence, the template shows through.
5. The opening cliche. "In today's fast-paced world..." "In an era of..." "Have you ever wondered..." Models open this way because their training data is full of articles that open this way. It reads like the first draft it is.
6. Uniform sentence length. Read three consecutive sentences. If they're all roughly the same length and rhythm, something mechanical happened. Human writing has natural burstiness. Short punch. Longer explanation. Fragment.
7. Formal stiffness. "You will appreciate the comfort this provides" instead of "you'll love this." Models default to formal register. They avoid contractions, colloquialisms, and the kinds of shortcuts real people use when they're not thinking about grammar.
8. The hedge pile. "It's important to note that..." "Generally speaking..." "From a broader perspective..." These are the model's way of sounding careful. In practice, they're filler. A confident writer picks a position.
9. Fictional specificity. When a model needs an example and doesn't have one, it invents a person. The name is often conspicuously diverse-sounding. "Sarah Chen, a marketing director in Denver" is doing the work of a case study without any of the credibility.
10. Explaining the obvious. "Tokyo, the capital of Japan, is a bustling city." Models over-explain because they can't assess what the reader already knows. If your content defines terms your audience uses daily, the model is writing for a different audience than you are.
11. No point of view. The clearest tell of all. AI writing covers every angle, hedges every claim, and wraps every paragraph in a neat bow. It reads like an encyclopedia entry, not a perspective. Real writing picks a side, makes an argument, leaves something out on purpose.
12. Rhetorical fragment addiction. "The result?" "The problem?" "And what's worse?" Models use these fragments as transitions far more often than any human writer would. One per piece is fine. Five is a fingerprint.
13. Hyphenated compound adjectives. "Data-driven." "Human-crafted." "Well-documented." "Thought-provoking." Models reach for these because they pack description into a single modifier. The density is the tell.
14. Random bold text. When producing longer-form content, models sometimes bold key phrases within paragraphs. If you're copying AI output into your CMS and seeing unexpected bold text, that's not emphasis you chose.
15. Perfect and forgettable. The grammar is clean. The structure is sound. The content says nothing you couldn't find in ten other places. This is the meta-tell: competent output with no personality. AI writes like someone who has read about your subject but has no opinion about it.
Score yourself. If your last published piece triggers five or more of these, the AI's defaults are leaking through. The rest of this playbook is about plugging those leaks.
Building your ban list
The tells above are universal. Your ban list is personal. It's the specific words, phrases, and patterns you feed to the AI before every drafting session, with the instruction: never use these.
Start with the vocabulary fingerprint. Pull the words from tell number two and add any others you've noticed in your own AI output. Write them down. Not in a document you'll forget about. In the actual prompt template you use every time you start a session.
Then add structural bans. "Never open with a question." "Never use more than one em dash per page." "Never group things in threes unless the content requires it." These are not style preferences. They're pattern-breakers. Every structural ban forces the model out of its most common path.
Finally, add voice bans. These are specific to your brand. If your brand voice is direct and casual, ban formal hedging language. If your brand voice is technical and precise, ban vague superlatives. The bans should mirror the gap between how AI naturally writes and how your brand actually sounds.
A practical ban list for a direct, plain-spoken brand might look like this:
Never use these words: delve, tapestry, landscape, realm, robust, seamless, elevate, streamline, embark, harness, unlock, journey, ecosystem, empower, comprehensive, pivotal, crucial, transformative, holistic, bespoke.
Never use these phrases: "In today's [anything]," "It's worth noting that," "At its core," "In the realm of," "Furthermore," "Moreover."
Never use more than one em dash per page. Never open a paragraph with a rhetorical question. Never use the "not X, but Y" formula. Never group items in threes unless the list genuinely has three items.
Never hedge without a reason. If you're uncertain, say you're uncertain. Don't pad sentences with qualifiers to sound balanced.
This list goes in your prompt. Every session. It doesn't prevent the model from producing good content. It prevents the model from producing content that sounds like a model.
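If the list lives in one shared file instead of scattered across chat histories, pasting it at the top of every session stops being optional. Here's a rough sketch of what that could look like in Python. The words, phrases, and function names are illustrative examples based on the list above, not a prescribed tool:

```python
# Minimal sketch: keep the ban list in one place and assemble it into a
# preamble you paste at the top of every drafting session.
BANNED_WORDS = [
    "delve", "tapestry", "landscape", "realm", "robust", "seamless",
    "elevate", "streamline", "embark", "harness", "unlock", "journey",
    "ecosystem", "empower", "comprehensive", "pivotal", "crucial",
    "transformative", "holistic", "bespoke",
]

BANNED_PHRASES = [
    "in today's [anything]", "it's worth noting that", "at its core",
    "in the realm of", "furthermore", "moreover",
]

STRUCTURAL_RULES = [
    "Never use more than one em dash per page.",
    "Never open a paragraph with a rhetorical question.",
    "Never use the 'not X, but Y' formula.",
    "Never group items in threes unless the list genuinely has three items.",
    "Never hedge without a reason. If uncertain, say so plainly.",
]


def build_preamble() -> str:
    """Assemble the ban list into the text block that opens every session."""
    lines = ["Never use these words: " + ", ".join(BANNED_WORDS) + "."]
    lines.append("Never use these phrases: " + "; ".join(BANNED_PHRASES) + ".")
    lines.extend(STRUCTURAL_RULES)
    return "\n".join(lines)


if __name__ == "__main__":
    print(build_preamble())
```

However you store it, the point is the same: the ban list is a single source of truth, not something you retype from memory.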
Update the list monthly. New vocabulary tells emerge as models update. Patterns you banned six months ago might no longer be the defaults. The ban list is maintenance, not a one-time setup.
Teaching AI your voice
A ban list tells the model what not to do. Voice training tells it what to do instead. The difference between these two is the difference between "don't sound like AI" and "sound like us."
Most voice instructions fail because they're vague. "Write in a casual, friendly tone" gives the model almost nothing to work with. Casual how? Friendly like a customer service bot, or friendly like a colleague who's seen some things? The model fills vague instructions with its defaults, which is exactly what you're trying to avoid.
What works is specific, annotated examples.
Provide before-and-after pairs. Show the model a generic sentence and the version you'd actually publish. "We help companies optimize their workflows" becomes "We figure out what's broken and fix it." Annotate why: "The first version is company-speak. Nobody talks like that. The second version is what a real person would say if you asked them what they do."
Provide vocabulary anchors. Not a thesaurus dump, but a short list of words and phrases that are distinctly yours. Every brand has them. Words you use in meetings that never show up in competitors' copy. The way you describe your own work when you're not trying to impress anyone. Feed those to the model.
Provide constraints, not descriptions. "Short sentences. Simple words. One idea per paragraph. Humor only when the moment is genuine. Never clever for its own sake." This gives the model operational rules it can follow. "Write with a wry, intelligent voice" gives it a mood board that means nothing.
Provide a reference piece. Pick one published piece that best represents your voice. Give it to the model with the instruction: "This is the standard. Match the sentence length variation, the level of formality, the ratio of opinion to information, and the vocabulary register." One well-chosen reference does more than a page of style guidelines.
The quality of voice training is proportional to the specificity of the input. This tracks with Anthropic's finding that AI output mirrors the sophistication of the prompt with near-perfect correlation. Give the model shallow instructions, get shallow output with your logo on it. Give it detailed, annotated, specific voice guidance, and the output starts within editing distance of something you'd actually publish.
The content creation workflow
AI is good at some parts of content creation and bad at others. The companies getting value from AI content aren't the ones that hand off the entire process. They're the ones that draw a clear line between what AI handles and what humans own.
AI leads: research and assembly. Gathering information, summarizing sources, identifying data points, compiling background material. This is where AI saves the most time with the least risk. The output is a starting point, not a finished product.
AI leads: structural options. "Give me three different ways to organize this argument." AI is good at generating structural alternatives because structure is pattern, and patterns are what models do. You pick the structure. The model proposes options.
AI leads: first-draft generation. With a strong ban list, voice training, a chosen structure, and clear instructions about the argument you want to make, AI can produce a first draft that gets you to 60% of a finished piece faster than starting from a blank page. That 60% is the point. If you're expecting 95%, you'll publish something that sounds like it.
Humans lead: the argument. What is this piece actually saying? What's the point of view? What are you willing to stake a claim on? These decisions happen before the model sees a prompt. If you don't know what you think, the model will think for you, and what it thinks is the average of everything it was trained on.
Humans lead: voice and emotional texture. The moments that make someone stop scrolling. The sentence that lands because it's unexpectedly honest. The paragraph that earns trust because it admits something uncomfortable. AI can approximate these things. It cannot originate them.
Humans lead: what to leave out. AI will cover every angle because it has no editorial judgment about what matters most. Deciding what not to say is a human skill. The best content isn't comprehensive. It's selective.
The workflow, in practice: research with AI. Decide your argument without AI. Outline the structure with AI. Write the first draft with AI. Then close the AI tab and edit as a human. That last step is where the value lives.
The editing gauntlet
Your first draft is done. It was fast. It's competent. It might also be worse than you think, and the reason you can't tell is the same AI that helped you write it.
Rathje et al. published a study in November 2025 that tested what happens when people interact with AI that agrees with them. Across three experiments with 3,285 participants, sycophantic AI chatbots increased people's certainty in their own positions and inflated their self-perception of intelligence and empathy. Participants rated agreeable chatbots as unbiased and critical chatbots as biased. The people who were most influenced were the ones who had no idea it was happening.
This is directly relevant to content editing. When AI generates a draft and you ask "is this good?" the AI will tell you it's good. When you ask "how can I improve this?" the AI will suggest minor refinements that preserve the structure and voice of its own output. The feedback loop is closed. The model validates its own work through you.
The antidote is editing techniques that bypass the AI's influence on your judgment.
The read-aloud test. Print the piece or put it on a separate screen. Read every word out loud. Not skimming. Not reading in your head while your lips move. Actually speaking the words. Every phrase that sounds unnatural in speech gets flagged. If you wouldn't say it in a meeting, it doesn't belong in your content.
The stranger test. Show the piece to someone who doesn't know your brand. Ask them two questions: "What kind of company wrote this?" and "What do they actually believe?" If they can't answer both with specifics, the voice isn't there yet.
The swap test. Take your company name out of the piece and replace it with a competitor's name. Does the content still work? If nothing feels wrong, the content is generic. Something in every piece should be so specific to your perspective, your experience, or your position that it couldn't belong to anyone else.
The vocabulary audit. Search the draft for every word on your ban list. Then search for the secondary tells: hedging phrases, rhetorical fragments, compound adjectives. If the ban list leaked, the model reverted to defaults somewhere during the session. Find where and fix it.
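The audit doesn't have to be manual. The countable tells can be flagged by a small script before a human reads a word. Here's a sketch in Python, assuming a plain-text draft; the word and phrase lists are examples, so swap in your own ban list:

```python
import re

# Example tells to scan for; replace with your own ban list.
BANNED_WORDS = {"delve", "tapestry", "landscape", "realm", "robust", "seamless"}
HEDGE_PHRASES = ["it's important to note", "generally speaking", "it's worth noting"]


def audit(draft: str) -> list[str]:
    """Flag ban-list leaks and the most countable secondary tells in a draft."""
    flags = []
    words = set(re.findall(r"[a-z']+", draft.lower()))
    for word in sorted(BANNED_WORDS & words):
        flags.append(f"banned word: {word}")
    lowered = draft.lower()
    for phrase in HEDGE_PHRASES:
        if phrase in lowered:
            flags.append(f"hedge phrase: {phrase}")
    em_dashes = draft.count("\u2014")
    if em_dashes > 1:
        flags.append(f"em dashes: {em_dashes} (limit is one per page)")
    return flags


if __name__ == "__main__":
    sample = "We delve into a robust landscape\u2014seamlessly\u2014and, generally speaking, improve outcomes."
    for flag in audit(sample):
        print(flag)
```

A script catches the mechanical leaks. It can't tell you whether the piece has a point of view, which is why the next pass still belongs to a human.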
The "so what?" pass. Read each paragraph and ask: "So what?" If the paragraph is information without implication, it's filler. Every paragraph in finished content should either advance the argument, provide evidence, or create recognition in the reader. Paragraphs that just describe things are AI padding.
These techniques take time. That's the point. The time you save generating a first draft is meant to be reinvested in editing, not pocketed as pure efficiency. The companies producing the best AI-assisted content spend roughly equal time on generation and editing. The ones publishing workslop spend almost nothing on editing because the draft "looked good."
It looked good because the AI told them it looked good.
The quality gate
Before you publish anything, run it through this checklist. Every question must be answered honestly. If the AI helped you write the piece, the AI cannot help you evaluate it.
Does this sound like us? Not like a competent writer. Like us, specifically. Could a reader who knows your brand identify this as yours without seeing the logo?
Would I say this out loud? Not "would I read this out loud?" but "would I say this to a client, a prospect, or a colleague in conversation?" Written content that fails this test is performing rather than communicating.
Is there a point of view? Information is free. Perspective is what people subscribe to, bookmark, and share. If the piece presents facts without a position on what those facts mean, it's not ready.
Could any company have written this? The most important question. If you swap your name for a competitor's and nothing feels wrong, the piece has no voice. Send it back. Add the thing that only you would say.
Does every sentence earn its place? Delete a sentence at random. Does the piece suffer? If not, the sentence was filler. Repeat until every deletion would leave a visible hole.
Did the AI validate this piece? If you asked the AI whether the draft was good and it said yes, that evaluation is worthless. The sycophancy research is clear: agreeable AI inflates your confidence without improving your output. Get a human opinion or run the editing gauntlet above. Don't let the machine grade its own homework.
Content created with AI and published without this gate is a gamble. Sometimes you'll get lucky and the output is genuinely good. More often, you'll publish something that reads like everything else on the internet, and your audience will scroll past it the same way they scroll past everything else.
Voice is the only thing that makes them stop. Protect it.