The Best Piece on AI and Designers I've Read This Year
A recent article by Peter Zakrzewski in UX Collective — titled "Oh, but there's one more thing..." — lands on the thing most AI-era career advice keeps missing.
Go read it. Link: uxdesign.cc/oh-but-theres-one-more-thing-edb9fbd79c95
Zakrzewski's argument, in summary: designers' irreplaceable value in the AI era isn't becoming better prompt-users. It's cultivated taste — the kind of domain knowledge and judgement an AI system structurally cannot replicate. He makes the case that designers should function as the "More Knowledgeable Other" in the AI workflow, providing the evaluative faculty that decides which problems deserve solving and how solutions should evolve.
I think he's right. And I think the argument extends well beyond designers — to every creative role where craft and judgement still matter.
The Jobs Quote That Anchors It
Zakrzewski opens on a 1995 Steve Jobs quote about Microsoft: "The only problem with Microsoft is they just have no taste... they don't think of original ideas, and they don't bring much culture into their products."
Jobs defined taste as "trying to expose yourself to the best things that humans have done" and bringing those elements into your work. He used the word "grokking" — borrowed from Robert Heinlein's Stranger in a Strange Land — to describe the kind of understanding so complete that observer and observed become unified.
That's not preference. That's structural knowledge. It's accumulated, cross-domain, culturally embedded expertise developed through lived practice — which is exactly the thing statistical training on text and images cannot produce.
Why This Matters for Creative Teams
Every agency I work with is dealing with a version of the same anxiety. The one-sentence version: "If AI can do 80% of the creative work, what are we for?"
The honest answer, informed by what Zakrzewski is arguing, is that AI is not what threatens your team. What threatens your team is treating AI as a replacement for the parts of your job that never should have been the differentiator in the first place.
Your differentiator was never "we can execute a brief faster than the other agency." That was always going to collapse under price pressure eventually. Your differentiator was always taste — the judgement that sits before the prompt is written and after the output comes back. What problem is actually worth solving for this client. What meaning the brand should propose that it hasn't yet articulated. Which of the twenty options AI just spat out is actually the one to ship.
The AI doesn't do any of that, and not because it hasn't caught up yet. Those decisions require embodied cognition, cultural timing, and accumulated sensibility that no training dataset can substitute for.
Roberto Verganti's "Design-Driven Innovation"
Zakrzewski references researcher Roberto Verganti, whose work on Italian design firms (Alessi, Artemide, Kartell) identified what he called "design-driven innovation." Those firms didn't succeed through focus groups. They succeeded by proposing radical redefinitions of meaning.
The iPhone didn't win by being a better phone. It redefined what portable communication could be within lifestyle. The Wii didn't win by having better graphics. It redefined who gaming was for and how you interacted with it.
Verganti's insight: this kind of innovation comes from cultivating relationships with "key interpreters" — architects, anthropologists, artists — people working at the leading edge of socio-cultural change. It's a human-networked form of meaning-making that is not retrievable from statistical pattern-matching.
This matters for creative teams because it locates the defensible expertise. The place where AI can't compete. And it tells you where to invest.
The Three-Condition Test
One of the sharpest things in Zakrzewski's piece is a framework he borrows from physicist Savas Dimopoulos on what distinguishes beautiful ideas from merely good ones. A worthy problem must simultaneously satisfy:
- Resonance — an unmet need that genuinely matters
- Difficulty — hard enough that the obvious solutions have already failed
- Timing — the cultural moment has arrived
Zakrzewski's point: "All three conditions must hold at once."
I'd push this further. Most creative work fails precisely because one of the three conditions wasn't met, and AI cannot help you with that diagnosis. It can help you execute against a brief. It cannot tell you whether the brief is worth executing against.
What This Means for Your Team
If taste is the defensible skill, then training — real training, not tool demos — has to develop it. And most AI training programs fail at this because they're framed as "how to use the tool" rather than "how to direct work through the tool with judgement."
That suggests three implications for how creative teams should be structured going forward.
Promote the judges, not the fastest prompters. The most valuable people on your team are the ones who can look at twenty AI outputs and pick the one that advances the work. Not the ones who can generate the twenty outputs fastest. The first is a skill that compounds. The second gets commoditised every six months.
Invest in the upstream work. Problem definition. Strategic framing. Brand intent. Zakrzewski makes the point that the iPhone team's critical decision wasn't which features to build — it was reframing the problem from "outcompete on features" to "what could a personal communicator become." AI is powerful in the solution space. Taste lives in the problem space. That's where your team's time needs to go.
Teach evaluation harder than generation. The default bias of AI training is to make people better at producing output. That matters, but it's the less important skill. The skill that matters more is teaching people to judge AI output critically — against real quality bars, real brand standards, real client fit. Generation without evaluation produces confident-sounding mediocrity at scale.
Where This Fits Our Own Methodology
At NotContent we train teams on three phases: Diverge, Converge, Systemize.
Diverge is the expansive phase — explore, try, generate volume. This is where AI is most powerful as a collaborator.
Converge is where taste enters. Lock direction. Apply judgement. Choose. This is the phase Zakrzewski is arguing for, and it's the phase most AI training programs skip entirely — they teach divergence and assume convergence will happen by instinct.
Systemize is where what you learned gets encoded into the operating model — so the taste of your best people becomes the floor for the whole team, not the ceiling for a few individuals.
The whole approach is built around the argument Zakrzewski is making. AI is great at volume. Human judgement is what turns volume into value. And the training that matters isn't training people to prompt better; it's training them to judge better, and giving them the structure to apply that judgement consistently across client work.
Read the Article
I'm adding my own angle here. The original is better. Zakrzewski's piece is one of the sharpest things I've read on AI and creative work this year — the kind of argument that should be sitting in front of every creative director thinking about what their team is actually for in two years' time.
Link again: uxdesign.cc/oh-but-theres-one-more-thing-edb9fbd79c95
If the argument lands for you, the operational question is what you do with it. Because taste that isn't cultivated, protected, and promoted inside your team will quietly erode. And the AI era rewards the teams that are training for judgement — not just for speed.