The "Clancy Trap"
⚠️ When Unintentional AI Slop Ruins Your Career and Business ... ⚠️
G’day! When I was a teenager I read a lot of Tom Clancy. Submarines, geopolitics, military strategy - it all sounded so amazing! I lapped it up. I believed every single word. Then one day he wrote about computers. And I knew about computers. And he was talking what clever people nowadays refer to as bollocks.
Yes: Tom Clancy, my hero, was making stuff up about computers!
And that got me thinking ... what else had he been making up? I’d trusted him on submarines and missile systems and CIA tradecraft and geopolitics - but the only reason I’d trusted him was because he sounded like he knew what he was talking about.
I didn’t know any better.
I call this the Clancy Trap.
And it’s one of the big issues I think about when I think about AI, and how clever people use it. Don’t get me wrong - I love using AI, and I love how it’s helped make me so much more productive, but sometimes it’s genuinely brilliant and sometimes it’s talking, um, bollocks.
The problem is ... which is which?
I keep coming back to a thinker I stumbled across when I did my MBA - a Canadian psychologist called Elliott Jaques. He developed something he called The Requisite Organisation, based on decades of studying how organisations actually work in the real world. His big idea somehow snuck into my brain and never escaped - I use it with my clients when they’re restructuring and hiring.
Jaques’ big idea?
It’s simple: the level you sit at in an organisation should match how far ahead - in time - you can think.
Think of your local supermarket:
The checkout operator thinks in minutes.
The store manager thinks in months.
The CEO thinks in years.
And here’s where it links up with the Clancy Trap. Jaques said that each level can evaluate the work of the people below them, but probably not above them. The store manager knows if the roster works, but ask them to evaluate the CEO’s strategy and they’re lost - though they may still offer an opinion! It’s above their pay grade.
The hierarchy works for two reasons. People at each level can make good decisions within their time frame. And they can judge the work of the people below them. Take either of those away and the whole thing falls apart.
The uncomfortable bit
And here’s the uncomfortable bit: if you use AI to produce work above your pay grade, the people at that pay grade will see straight through it. You won’t know it’s shallow. They will. You will look like a dork.
Back to AI.
When we use AI, we’re delegating work to something else. And that’s good, when you can judge the response.
I’m sitting here to-ing and fro-ing (co-creating) with Claude. It’s helping me write this (so far we’ve spent about 120 minutes on it, so it’s not always a fast process), and you’re reading my ideas - ideas I’ve fine-tuned and improved with Claude’s help, using my judgement.
My judgement - I know when our work is good, and I know when it’s not.
But what if I asked Claude to write something about ... I don’t know ... submarines or missile systems or CIA tradecraft or geopolitics?
Stuff above my pay grade?
Or what if I got it to write a paper that would make Peter Drucker proud?
How would I know it’s good, or not?
I could ask it for a strategic analysis, or a financial model, or a paper that reads like someone much smarter than me wrote it.
It’d be no different to me at fifteen, reading my Tom Clancy, nodding along at the submarine bits.
The output might be brilliant. It might be bollocks. And the whole point of the Clancy Trap is that I wouldn’t know which.
This isn’t a new problem. We’ve always had it with confident, well-dressed people. Our System 1 brains - I wrote about this recently here - substitute the hard question “is this person competent?” with the easier question “do they look confident?”
We end up confusing confidence with competence.
It’s like our brain thinks that dressing someone up in a $3,000 suit adds 30 IQ points. It doesn’t, of course. But good luck convincing your System 1 of that.
(Over the years, I’ve developed a simple heuristic that I don’t trust well-dressed confident people as much as I trust scruffy, awkward people, and it helps. Both have to earn the trust.)
So what can a sensible, caring person do to avoid the Clancy Trap?
First - build a thinking habit
Keep reminding yourself that well-dressed ideas aren’t necessarily good ideas. AI output looks professional because it always looks professional - that’s its thing.
If it helps, memorise the name - “The Clancy Trap” - because naming something turns it into a real thing and makes it easier to remember.
Second - pull out your red pen
When you’re using AI to think through hard stuff, ask yourself: could I red-pen what just came back? Could I spot where it’s shallow, or wrong, or where it just sounds right but isn’t? If you can’t, you’re in the Clancy Trap.
This is good, solid, System 2 thinking. You’re good at this - just make sure you do it!
Third - Ask the big question!
This is the big one - ask yourself whether you should be delegating this stuff upward at all.
Just because AI will have a go at anything doesn’t mean you should let it.
Sometimes the right answer is: this is above my pay grade, and getting a confident-sounding response doesn’t change that.
And finally ...
Don’t forget that you’ve got a boss, or a boss’s boss, or access to other experts.
Go talk to them.
Hope you’re doing well!!!! Hope this helps.
In other news, I might be writing a book called Cleverer, Fasterer, Stupider, which is my bestest attempt to help leaders and their leadership teams adapt to the new AI era, from a ToC angle, in < 10,000 words. No prompts! No agents! Just a relentless focus on not doing this to their brains: 🤯.
Clarke
p.s. hit reply now! Say hi! 👋


