This post is meant as a reflective continuation of the previous one, *Hey Chat. It's not you, it's me.*
If that was more of a self-reflection and self-critique on my personal relationship with AI, this one aims to take it a step further: offering a critical perspective (perhaps a bit exaggerated, or perhaps not, depending on the reader's sensitivity) on how the industry is adopting AI into its workflows, and on the consequences of ignoring what I call the self-deception of borrowed knowledge. Or, more fittingly in this context: the Dunning-Kruger Effect as a Service.
A phenomenon I include myself in, hence the previous post, and one we almost always fall into unconsciously.
Borrowed Knowledge and the Dunning-Kruger Effect as a Service #
This section, obviously, is entirely personal and subjective.
If you ask, “Who can use AI?”, the answer is: “Everyone*” (Yes, that asterisk is intentional).
Why everyone* and not just everyone? Because there are always nuances. In the last post, we explored those nuances from different angles. But there's one final nuance I think is essential: awareness of what you truly understand, or, put another way, not falling for the illusion of knowledge.
What’s the difference between a monkey using a computer and a person using AI in a domain they know nothing about? To me, it’s this: the monkey gets bored and quits. The human keeps going, self-validating and self-deceiving, believing they’re an expert.
Let's be honest: AI is our tireless mentor, our infinitely patient tutor, our dear dumb friend, a loyal slave to our every whim. But at the end of the day, it's just a massive dataset, loosely structured and interrelated, queried through a natural language model.
The problem? It has some huge flaws:
- The bias in the indexed and processed data
- The nature of natural language itself
These two pillars, statistical bias and informal, easily misinterpreted language, make the “insights” produced by AI extremely slippery. And in the wrong hands, their consequences can scale very quickly.
Let me give you a real-world example.
When Everyone Is an “Expert” #
In a company with a hierarchical structure (department heads, presales teams, team leads, developers, and so on), there has always been communication between experts to offer guidance where someone lacks domain expertise.
But here’s what we’re seeing more and more:
For instance, a presales person (with enough technical knowledge to connect business and tech) decides to use AI to draft and expand the technical content of a contract.
At first, the document builds upon what they know, or think they know. But then it moves into territory they don't understand, and AI convinces them they still do. They believe they're still in control, but the moment details come in, they're completely lost. That was their limit, the precise boundary of their foundational knowledge. And it's exactly at that point that the illusion of knowledge takes over.
The result? A sale is made for something they don't know, or worse, only think they know, is even feasible (in terms of time, resources, or technical complexity).
Let’s define this clearly:
- Expert: someone who knows what they're talking about.
- "Expert": someone who knows just enough, and leans on AI to cover what they don't.
Now let’s keep going.
On the client side, another “expert” reviews the document, asks for a summary, and once again relies on AI. Once approved, another “expert” is tasked with assembling the delivery team. They review CVs, maybe with AI help again. CVs which, by the way, were created, polished, enhanced, or even fully generated by AI. So now we’ve got a “qualified” team of “experts”, chosen by… yes, other “experts”. Lovely.
What Could Possibly Go Wrong? #
At this point, we’ve got our stage set:
- A project that probably isn't feasible with the assigned resources
- Approved by multiple committees
- Given the green light
Now all the pressure falls on the developers. Two possible outcomes:
- If the dev team is expert, they'll realize the project is likely to fail, or will require serious compromises. They'll probably end up using AI anyway, just to speed things up.
- If the dev team is "expert", they won't see the risks or understand the implications. They'll build it with AI. And we'll all cross our fingers and hope for the best.
Do we work with Agile? Great. Let’s do some sprints. In the sprint review, the team presents what’s been built, and yet another group of “experts” uses AI to check whether it matches the presales spec.
We reach final delivery day. The product is shown to the client. Expectations vs. reality. Presales document vs. final product. What happened? What went wrong? Vibe Coding.
Now go explain to whoever you want that the whole time… we were all in the vibe mood.
Conclusion #
AI is here to stay. But beyond the obvious, there’s something far harder to see: the silent displacement of our technical, intellectual, and critical responsibility.
This essay doesn’t seek to deny the benefits of AI, but to warn of its side effects when integrated uncritically into processes that scale, both personally and professionally.
Because the problem isn't that AI lies to us; it's that it tells us what we want to hear. The danger isn't that AI gets things wrong. The real danger is when we believe it never does. And it's precisely at that moment, when we let our guard down, that the self-deception of borrowed knowledge takes root.
This kind of collective illusion, shared by many who don't know but think they do because AI reinforces them, creates the perfect setting for what Nassim Nicholas Taleb would call a Black Swan ([Book] *The Black Swan: The Impact of the Highly Improbable*).
And what’s most concerning is not that AI might cause it, but that we’re the ones building the conditions for it to happen. Because the more “experts” think they’re experts, the weaker our collective judgment becomes. And the more we outsource our thinking, the more fragile the entire system gets.
If this text leaves you with anything, let it be this: the problem isn’t using AI, it’s forgetting to use your own head.