A friend of mine called me last month. She works in a mid-size organization—the kind with a VPN, a locked-down laptop, and compliance training every January.
"Marta," she said. "They just gave us AI."
"That's amazing!" I said.
"No," she said. "You don't understand. They gave us AI. Then they disabled internet access. Then they restricted what the agents can do. No actions. No automations. No tool use."
I paused. "So what can it do?"
"It can read. It can chat. It can summarize."
"So they bought you a 2023 chatbot."
"They bought us a car," she said, "then removed the engine and the steering wheel."
She paused.
"The radio still works, though."
I laughed so hard I nearly choked on my coffee. But here's the thing—after I stopped laughing, I realized I'd been hearing versions of this story everywhere. From colleagues at conferences. From messages in my inbox. From knowledge workers who keep reaching out to say: "We have AI now, but we can't actually use it."
What's fascinating isn't the technology. It's the behavior. Because this pattern is everywhere—and it has very little to do with whether AI "works."
It has to do with power.
The Problem Nobody Defined
Here's what my friend described next, and it sounded painfully familiar.
Leadership didn't know what problem they wanted to solve. They wanted AI. Period. Not a specific bottleneck. Not a measurable outcome. Not a workflow redesign.
Just… AI. Because everyone else has AI. Because the board asked about AI. Because there was a budget line item for AI.
When you don't define the problem, you don't design a solution. You perform an initiative.
And performing an initiative looks like: purchasing licenses, forming a working group, talking about "agents" in meetings, and planning to buy yet another SaaS platform—without ever answering the question that matters most:
What specific thing are we trying to fix?
The Thing They Couldn't Hear
Here's what nobody in my friend's leadership meetings seemed to notice: their knowledge workers had already been solving problems for years. Just not with AI.
When official tools don't match reality, people build workarounds.
Spreadsheets become dashboards. Email threads become approval systems. Personal folders become repositories. Unofficial templates become "the real standard."
If you've been reading my blog for a while, you know I call these shadow systems. And they're not a rebellion. They're survival.
They're also a signal. A loud, blinking, neon signal that says: the official workflow is too slow, too rigid, too fragmented—or too detached from real work.
My friend's organization had dozens of these shadow systems. People had been quietly building solutions for years. But when leadership launched their AI initiative, nobody thought to ask: "What are our people already building? What problems have they already identified?"
(Because of course they didn't.)
Why This Time Is Actually Different
Here's where my data scientist brain gets excited.
Knowledge workers possess something incredibly valuable: operational knowledge. They hold the real map—the exceptions, the constraints, the edge cases, and the parts that always break.
For the first time in history, AI makes it possible for knowledge workers to turn that operational gold into actual tools.
Not giant enterprise software that takes eighteen months to deploy. Small, surgical solutions, scoped to one workflow at a time.
These aren't fantasies. I've built tools like these myself. I know the journey from "I have an idea" to "it's deployed and people are using it." It's messy, it's frustrating, and it's absolutely possible.
But only if the organization lets people drive the car.
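So that "small and surgical" doesn't stay abstract, here's a minimal sketch of the kind of thing I mean: the sort of script a knowledge worker could build with an AI assistant's help, which reconciles two monthly report exports and flags the rows a human should look at. Every file name and column name here is invented for illustration; a real version would match your own exports.

```python
# A hypothetical "small, surgical" tool: reconcile two monthly report
# exports and flag the rows that don't agree. File and column names
# are made up for illustration.
import pandas as pd

def reconcile(official_path: str, shadow_path: str, key: str = "invoice_id") -> pd.DataFrame:
    """Return rows where the two exports disagree, or appear in only one file."""
    official = pd.read_csv(official_path)
    shadow = pd.read_csv(shadow_path)

    # Outer merge keeps rows that exist in only one export;
    # the indicator column records which side each row came from.
    merged = official.merge(
        shadow, on=key, how="outer",
        suffixes=("_official", "_shadow"), indicator=True,
    )

    missing = merged["_merge"] != "both"
    mismatched = merged["amount_official"] != merged["amount_shadow"]
    return merged[missing | mismatched]

if __name__ == "__main__":
    issues = reconcile("finance_export.csv", "ops_export.csv")
    issues.to_csv("to_review.csv", index=False)
    print(f"{len(issues)} rows need a human look.")
```

That's about twenty lines. It won't win any architecture awards. But it's exactly the kind of tool I mean: built in an afternoon, aimed at one specific pain, and owned by the person who feels that pain every month.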
The Uncomfortable Truth About Unofficial Power
This is the part that explains the pattern I keep hearing about.
When a knowledge worker builds a solution that works—really works—something subtle happens. They create what I've started calling "unofficial power."
Because when someone can build a solution in a week—without waiting six months for procurement, without filing seventeen tickets, without three rounds of committee approval—the organization's traditional control system gets challenged.
Not deliberately. Not maliciously. Just… structurally.
And instead of creating a safe lane for that building energy, many organizations do the defensive thing:
They centralize decisions. They lock down permissions. They buy "approved" tools that nobody asked for. They create committees to manage the uncertainty.
They don't want distributed capability. They want sanctioned capability.
So they buy the car… and remove the parts that make it drive.
The Proposal That Couldn't Land
My friend tried something. She proposed a pilot program.
Simple concept: pick a few champions. Train them to identify root causes. Build small targeted solutions. Create a playbook. Include safety training. Measure results and repeat.
In other words: learn how to solve problems with precision instead of hoping a platform will "fix" a broken workflow.
The working group didn't reject the approach aggressively. They didn't argue against it. They just… couldn't hear it.
The only change they made? Meeting two hours a week instead of thirty minutes every two weeks.
They understood meeting time. They didn't understand building capability.
That distinction broke my heart a little, honestly. Because I've been there—that moment when you have the answer and nobody's listening. Not because they're bad people. Because the answer doesn't fit the shape of what they expect a solution to look like.
The Five Questions That Reveal Everything
After hearing my friend's story (and several others like it), I started thinking about what separates real AI rollouts from what I'm now calling AI theater.
Here are five questions that tell you everything:
1. What specific problem are we solving? Not "we need AI" but "we need to reduce the time it takes to reconcile monthly reports from three days to three hours."
2. What measurable outcome will change in 30 days? If you can't name it, you're performing, not solving.
3. What data is allowed, and where does it flow? This isn't optional. This is the foundation. (My Episode 4 people know what I'm talking about.)
4. Who owns the process after launch? Not "the committee." A person. With a name.
5. What gets shut off if it doesn't work? If there are no exit criteria, there's no accountability. Just a perpetual "initiative."
If an organization can't answer those five questions, it doesn't have an AI program.
It has AI performance.
The Final Irony
Here's what keeps me up at night about stories like my friend's.
Restricting capability doesn't eliminate shadow systems. It usually just makes them quieter, more fragmented, less secure, and harder to audit.
So the organization gets the worst of both worlds: no real innovation *and* unmanaged workarounds.
The choice isn't control versus chaos.
The real choice is invisible chaos versus visible, governed innovation.
What I Believe
I've said this before and I'll keep saying it: knowledge workers have gold. Operational knowledge that no consultant, no vendor, no AI platform can replicate.
AI is the first tool that lets them turn that gold into working systems.
The organizations that win won't be the ones who bought AI.
They'll be the ones who learned how to safely let people build.
My friend? She's still showing up to the meetings. Still taking notes. Still listening. She told me she's done spending political capital on arguments that don't land.
But she's also still building where she's allowed to build.
Because the people who figure this out? They won't be waiting for permission. They'll be the ones who already shipped something while the committee was still scheduling the next meeting.
Still listening to stories like these, still believing knowledge workers deserve better tools, still building anyway.
P.S. To my friend who inspired this post: hang in there. The radio works for now. But one day, someone's going to notice you already know how to drive.
P.P.S. If your organization bought you a car without an engine, come tell me about it. I'm collecting these stories. There's a pattern here, and it matters.