The Thing Nobody Put a Name On
Ten years ago every IT team had a Shadow IT problem. People installing unapproved apps on their work laptops. Sales teams running parallel CRMs in spreadsheets. Designers using personal Dropbox accounts to share client files.
It was fine until it wasn't.
The Shadow IT problem eventually got solved — not perfectly, but structurally. Approved tool lists. Single sign-on. Security audits. Most agencies now have a pretty good idea what software their team actually uses.
We're now in the same place with AI. Except nobody's named it yet.
52% of Your AI Activity Is Invisible to You
Spark AI's 2026 report is blunt about this. Across the UK agency sector, 52% of AI activity is classified as informal — individual curiosity, no company-wide standards, no approved tools, no defined expectations.
Six months ago that was general experimentation. Now it's normalised shadow AI.
Which means more than half of your team's AI-assisted work — including work that ends up in deliverables clients pay for — is happening inside personal accounts, on personal logins, with no oversight, no data boundaries, and no record of what was fed to which model.
Read that back. Think about who owns the output. Think about where the data went.
The Two Risks Leaders Keep Missing
Most conversations about Shadow AI get stuck on the obvious risk: data leaks. "Don't paste the client's strategic plan into ChatGPT."
That's real. But it's not the biggest problem.
Risk one: your contracts don't match reality. The Wow Company's 2026 Benchpress survey — cited in the Spark report — found only 25% of agencies have updated their client contracts to reflect AI usage. 75% are operating on terms that don't mention it at all. That's contractual ambiguity your clients will eventually exploit. Procurement teams already are.
Risk two: your IP walks out the door. When a team member builds a custom prompt chain or an automated workflow in isolation, the efficiency is theirs. The institutional knowledge is theirs. If they leave, both leave with them.
This one's quietly destructive. Every agency I've worked with that's been through M&A has hit it. Buyers ask: "Show us your AI capability." The answer that matters is "it's documented, systematised, and owned by the business." The answer that kills the valuation is "it's in the head of three people we hope stay."
The Paradox of Confident Teams
Here's the part that sounds counterintuitive but shows up in the data every time.
The more capable your team gets with AI, the further they push into territory where no rules exist. That's when the exposure grows. The beginner who uses AI to summarise a meeting is not the risk. The power user who built an automated client briefing pipeline is the risk — because they're operating somewhere the policy was never written to reach.
The Spark report flagged this specifically: interest in risk management, IP, and data governance among agency staff has surged 50% in the last six months. Your team is asking for guardrails. Leadership silence is creating the exposure.
The One-Page Fix
Governance does not need to be a 40-page document. It needs to be one page that your team will actually read.
Four things:
- Approved tools. Which models and platforms are allowed for client work. Which are allowed for internal work. Which are not allowed at all.
- Data boundaries. What can be processed through AI. What cannot. Specifically for client assets, confidential briefs, and PII.
- Human oversight requirements. Where a human must review before output leaves the agency. Where they don't need to.
- Disclosure standards. When you tell clients AI was involved. What that conversation sounds like. What's in the contract.
One page. Reviewed quarterly. Posted somewhere everyone sees it.
Alongside it, a traffic-light system in your project management tool. Green for AI-friendly work. Amber for internal use only. Red for confidential briefs where no AI is involved. This takes the decision off the individual — which is exactly where it shouldn't be sitting — and makes it organisational.
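The policy and the traffic-light system can be expressed as data rather than prose, which is what makes them checkable. A minimal sketch in Python of how one might encode it — every tool name and category here is a hypothetical illustration, not a recommendation from the report:

```python
# A one-page AI policy expressed as data, plus a traffic-light check.
# All tool names are hypothetical placeholders.

POLICY = {
    "approved_tools": {
        "client_work": {"ToolA"},    # allowed on client deliverables
        "internal_only": {"ToolB"},  # internal drafts and research only
    },
    "banned_tools": {"PersonalAccountTool"},
}

# Traffic-light labels, applied per project or brief.
GREEN = "green"  # AI-friendly work
AMBER = "amber"  # internal AI use only
RED = "red"      # confidential brief: no AI involvement

def may_use_ai(tool: str, light: str, client_facing: bool) -> bool:
    """Return True if this tool may touch this piece of work."""
    if tool in POLICY["banned_tools"]:
        return False
    if light == RED:
        return False
    if light == AMBER and client_facing:
        # Amber work never leaves the agency via AI output.
        return False
    if client_facing:
        return tool in POLICY["approved_tools"]["client_work"]
    return (tool in POLICY["approved_tools"]["client_work"]
            or tool in POLICY["approved_tools"]["internal_only"])

# The decision is organisational, not individual: the same
# inputs always give the same answer.
print(may_use_ai("ToolA", GREEN, client_facing=True))  # True
print(may_use_ai("ToolA", RED, client_facing=False))   # False
```

The point is not that agencies should ship this script; it's that once the rules are written down this plainly, they can sit in a project management tool, an onboarding doc, or a pre-flight checklist, and stop living in individual judgment calls.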
Why This Now
Formal AI governance used to be a nice-to-have. It isn't anymore.
Spark's data shows formal AI policy is now a baseline client expectation. Clients are asking for it in procurement. Buyers are asking for it in due diligence. And your team, quietly, is asking for it too — because they know they're carrying the exposure and they want the guardrails.
The agencies that systematise this win twice. They satisfy clients who are increasingly demanding operational maturity. And they build the kind of agency a buyer or growth partner actually wants to partner with.
The ones that don't systematise it keep carrying the risk on individual shoulders. Until the day it stops being shadow and starts being headlines.
Name it. Write the page. Post it.

