The Vending Machine Problem
Most people use AI like a vending machine. Put in a prompt. Get an output. Accept whatever comes out. If it's not great, try a completely different prompt. Repeat until something's good enough.
This is the single most common pattern I see when I train teams. It doesn't matter if they're junior designers or senior strategists — the default behavior is to accept the first response. Maybe they'll ask a follow-up. Maybe they'll try rephrasing. But systematic iteration? Pushing back on the AI? Telling it specifically what's wrong and how to fix it? Almost nobody does this naturally.
Anthropic published research analyzing nearly 10,000 conversations. The finding that matters most: users who iterate — who refine, challenge, and build on AI responses rather than accepting them — are 5.6 times more likely to question the AI's reasoning. They're 4 times more likely to identify missing information. And they produce dramatically better outputs.
5.6 times. That's not a marginal improvement. That's a different category of user.
What Iteration Actually Looks Like
Let me show you the difference with a real example from a training session.
The vending machine approach: "Write me a creative brief for a campaign targeting Gen Z for a sustainable fashion brand."
The trainees get a generic brief. It's fine. It has the right sections. It doesn't have any insight. They copy it into a doc and start editing manually.
The iterative approach: Same starting prompt. But then:
"This is too generic. The brand's differentiator is that they use deadstock fabric exclusively. Rewrite the brief to make that the central strategic tension — the idea that sustainability and style are usually positioned as a tradeoff, but deadstock eliminates the tradeoff."
Then: "The target audience section reads like a demographics textbook. These are 22-year-olds who buy vintage because they want to be different, not because they care about sustainability as a cause. Rewrite to reflect that psychographic."
Then: "Good. Now pressure-test this brief. What would a skeptical creative director push back on? Where is the brief asking for something the brand can't deliver?"
That's three rounds of iteration. Total time: maybe 8 minutes. The final output bears almost no resemblance to the first response. Not because the AI got lucky — because the human got specific.
Why People Don't Iterate
If iteration is so powerful, why don't people do it? Three reasons.
They don't know the output could be better. If you've never seen what a well-iterated AI output looks like, the first response seems fine. You have no reference point for what "great" looks like. This is why training that includes examples of before-and-after iteration is so much more effective than training that just teaches prompt formats.
They treat AI as an oracle instead of a collaborator. The mental model matters. If you think of AI as a system that gives you answers, you accept those answers. If you think of AI as a collaborator that gives you drafts, you iterate on those drafts. Changing the mental model changes the behavior.
They haven't been trained in evaluation. Iteration requires knowing what's wrong with the output. That's an evaluation skill. Most people can feel that an output isn't quite right, but they can't articulate why. And if you can't articulate why, you can't give the AI useful feedback.
Only 30% of users tell AI how they want it to behave. That means 70% are accepting whatever voice, tone, format, and approach the model defaults to. They're not driving. They're riding.
The Iteration Framework We Teach
In our programs, we teach a four-step iteration loop that works for any AI task:
1. Generate. Give the AI your initial prompt with as much context as possible. System prompt, examples, constraints. Don't start with a lazy prompt and expect iteration to save you — start strong and iterate from strength.
2. Evaluate. Read the output critically. Not "is this good?" but specifically: What's missing? What's generic? What doesn't match my standards? What would my client/boss/audience push back on?
3. Direct. Tell the AI exactly what to change and why. "The tone is too formal" is okay. "The tone reads like a press release. Make it sound like a smart friend explaining something over coffee" is 10x better. Specificity in feedback produces specificity in output.
4. Stress-test. Once the output is close, ask the AI to find the weaknesses. "What's the strongest counterargument to this?" or "Where would a skeptic poke holes?" This is the step most people skip, and it's the step that separates good from dangerous.
Three rounds through this loop — about 10-15 minutes — will produce output that's better than what most people generate in an hour of manual work.
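For readers who think in code, the loop can be sketched as a function. This is a hedged illustration, not our actual tooling: `call_model`, `iterate`, and the message format are hypothetical stand-ins for whatever AI client you use, and `call_model` is stubbed here so the structure runs on its own.

```python
# A minimal sketch of the four-step loop. `call_model` is a stand-in
# (an assumption, not a real API): a real version would send `messages`
# to your model provider and return the assistant's reply.
def call_model(messages):
    rounds = sum(1 for m in messages if m["role"] == "user")
    return f"[draft after {rounds} round(s) of direction]"

def iterate(initial_prompt, feedback_rounds, stress_test_question):
    """One pass of generate -> evaluate/direct -> stress-test."""
    # 1. Generate: start strong, with full context in the first prompt.
    messages = [{"role": "user", "content": initial_prompt}]
    draft = call_model(messages)
    messages.append({"role": "assistant", "content": draft})

    # 2-3. Evaluate and direct: each item is specific feedback on the
    # previous draft, not a rephrased version of the original prompt.
    for feedback in feedback_rounds:
        messages.append({"role": "user", "content": feedback})
        draft = call_model(messages)
        messages.append({"role": "assistant", "content": draft})

    # 4. Stress-test: ask the model to attack its own output.
    messages.append({"role": "user", "content": stress_test_question})
    critique = call_model(messages)
    return draft, critique
```

The design point worth noticing: feedback accumulates inside one conversation instead of restarting with a new prompt, which is exactly the vending-machine habit the loop replaces.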
The Informal Problem
Here's something else the research uncovered: roughly half of AI activity in organizations happens informally, without team sharing. People are iterating (or not) in private, building personal techniques (or not), and none of it becomes organizational knowledge.
This is how you end up with a team where one person is getting incredible outputs and everyone else thinks AI is mediocre. The difference isn't the tool — it's the technique. And the technique isn't being shared because nobody has a system for sharing it.
This is why we build shared prompt libraries and documented workflows into every training engagement. Not as a deliverable that sits in a folder — as a living system that teams contribute to. When someone discovers an iteration pattern that works, it goes into the library. When someone builds a system prompt that produces great output, it becomes a team resource.
Individual iteration makes one person dangerous. Systematic iteration makes the whole team dangerous.
The Skill That Transfers
Here's the thing about iteration that makes it worth teaching: it's not just an AI skill. It's a thinking skill.
The ability to evaluate output critically, give specific directional feedback, and stress-test conclusions is valuable in every part of professional life. When I train someone to iterate with AI, they get better at giving feedback to humans too. They get better at evaluating creative work. They get better at briefing.
AI is just the fastest feedback loop we've ever had for practicing these skills. You can iterate ten times with Claude in the time it takes to give one round of human feedback. That compression makes it the best training environment for critical thinking that's ever existed.
Most AI training teaches people to use a tool. Iteration training teaches people to think. One of those lasts.

