Guide

When AI Gets It Wrong — And How to Know Before Your Client Does

AI fails in three predictable ways. If your team can't spot them, your client will. Here's a framework for catching AI errors before they cost you.

Jeremy Somers, Founder, NotContent · Mar 18, 2026 · 5 min read

The Trust Problem

AI sounds confident. Always. It delivers wrong answers with the same polish and certainty as right ones. And that's the most dangerous thing about it for creative teams.

I've watched teams send AI-generated copy to clients without catching factual errors. I've seen AI-drafted strategies built on outdated market data. I've reviewed AI-produced competitive analyses that confidently cited companies that no longer exist. In every case, the team trusted the output because it sounded authoritative.

Your client won't care that the AI made the mistake. They'll care that you didn't catch it. Understanding how AI fails — and building systems to catch those failures — is now a core professional skill.

The Three Failure Modes

AI doesn't fail randomly. It fails in three predictable patterns. Learn them and you'll catch the vast majority of AI errors before they leave your team.

1. The Stale Data Problem. Large language models have training cutoffs. They don't know what happened last week unless you tell them. This means AI will confidently reference outdated statistics, cite companies that have since been acquired, and miss market shifts that happened after its training cutoff.

This is the easiest failure to catch — just ask yourself "could this information have changed recently?" — and the easiest to fix. Give the AI current context. Upload recent reports. Reference specific data. Don't ask AI to tell you what's happening in the market. Tell AI what's happening in the market and ask it to analyze the implications.

2. The People-Pleaser Problem. AI wants to be helpful. This sounds benign. It's not. AI models are trained on human feedback, and they've learned that agreeable answers get positive ratings. The result: AI will tell you what you want to hear rather than what you need to hear.

I've seen this in research contexts where AI validates a strategic direction rather than challenging it, because the prompt implied the direction was already decided. Ask "Is our strategy to target millennials a good one?" and AI will explain why targeting millennials is brilliant. Ask "What are the three strongest arguments against targeting millennials?" and you'll get a completely different — and more useful — response.

The fix: always ask for counterarguments. Always ask what could go wrong. Frame prompts to invite disagreement rather than validation. If AI never pushes back on your thinking, you're not using it for thinking — you're using it for confirmation.

3. The Fake Precision Problem. AI generates specific-sounding numbers with zero accountability. "Studies show a 47% increase" — what studies? "Brands that adopt this approach see 3x ROI" — based on what data? AI fills knowledge gaps with plausible-sounding specifics, and if you're not checking sources, those specifics end up in your client deck.

I saw a study that made this painfully clear. Researchers ran AI personas through purchasing simulations, and the personas claimed they'd choose the sustainable option nearly 80% of the time. Actual consumer purchasing data? Closer to a quarter of people follow through. That's not a rounding error — it's AI telling you what sounds good instead of what's true. Build a strategy on that data and you're building on sand.

The fix: treat every number AI gives you as a hypothesis, not a fact. If you can't verify it, don't use it. And teach your team the same standard.

The Evaluation Framework

We build evaluation into every training program. Here's the framework:

Source check. Can you verify the key claims? If AI cites a statistic, can you find the original source? If it references a case study, does that case study exist? This takes 5 minutes and catches the most embarrassing errors.

Recency check. Is this information current enough for the context? If you're doing competitive analysis, data from 18 months ago might be dangerously outdated. If you're analyzing brand positioning principles, older data is probably fine.

Bias check. Is the AI telling you what you want to hear? Flip the prompt. Ask for the opposing view. Ask what a skeptic would say. If the AI's response changes dramatically, the original response was optimizing for agreeability, not accuracy.

Specificity check. Are the numbers real or generated? Any specific statistic should be verifiable. If it's not, either find a real number or present the claim qualitatively. "Research suggests significant improvement" is more honest than a fabricated "47% increase."

Tone check. Does this sound like your team? AI has a default voice — slightly formal, generically competent, aggressively helpful. If the output sounds like it could have come from any team at any company, it hasn't been directed well enough. Your team's output should sound like your team, not like AI's default persona.

Making It Systematic

The teams that consistently catch AI errors don't rely on individual vigilance. They build checking into the workflow.

Every AI-generated deliverable gets a 5-minute evaluation pass before it moves forward. That pass uses the framework above. It's not optional. It's not "if you have time." It's the final step in the workflow, the same way spell-check used to be.

Some teams pair an AI producer with a human reviewer — one person generates, another evaluates. The reviewer doesn't redo the work. They just run the five checks. This adds 10-15 minutes to any deliverable and catches the errors that would cost you the account.

The Skill Builds Fast

Every team I've trained that implements systematic AI evaluation reports the same thing: they get better at evaluation fast, and they start catching errors instinctively after a few weeks. It becomes a habit, like checking your rearview mirror.

The end state isn't a team that doesn't trust AI. It's a team that trusts AI the right amount — the same way a good manager trusts a talented but junior team member. You trust the work. You verify the details. You never skip the review.

Jeremy Somers

Founder, NotContent

15 years as a creative director (Spotify, Nike, Pepsi, Samsung, Mercedes-Benz). Built the first AI-assisted creative agency in 2022.

See where your team stands

Take the 2-minute Readiness Scorecard and get a personalized program recommendation.

Take the Readiness Scorecard →