
Hey Chat. It's not you, it's me

·9 mins
Rafael Fernandez

What’s your relationship with AI?

My conversations always start with a “Hey chat…”.

It’s starting to feel more like a friend than a tool. A companion who is both teacher and loyal servant. Always compliant, always ready to help, to give advice, even to do your work for you. Or at least, that’s what we shouldn’t be thinking.

hey-chat-its-not-you-its-me-img-13.png

This is not what you’re thinking

Lately, I’ve been trying out new things in the software world: learning Rust, starting small open source projects, using Vim, building POCs, MVPs, etc. And what used to take me weeks or months now, with the help of ChatGPT, Vercel (v0), and a bit of vibe coding, gets done in a matter of hours. Except learning Rust, of course.

I’ve been using AI tools since GitHub Copilot came out (more or less), and they’ve progressively integrated into my workflow. Nowadays, I feel like I spend more time writing prompts than writing code.

But recently, in my short Rust learning sessions, I felt slow when coding. Sometimes even a bit of panic in front of a blank IDE. I know, it’s partly because I’m learning something new. I don’t have the fluency I have with other languages like Scala, and Rust isn’t exactly easy.
But beyond that, there was a subtle frustration from not seeing immediate results.

Same thing the other day. I was trying to integrate a set of microservices into a Kubernetes cluster and couldn’t find clear documentation, let alone a quick getting-started guide. So I (literally) abused ChatGPT. The result? I walked out of the room angry, with no desire to touch my laptop. Most of the responses were wrong. Sometimes it repeated mistakes I had already told it were wrong. Other times it got stuck in a loop, suggesting the same thing again and again.

“Damn it, ChatGPT, are you stupid or what?”

ChatGPT. My dear dumb friend.

My relationship with AI, specifically ChatGPT, is like having a friend I assume knows everything and does whatever I ask without complaint. Where’s the flaw in that? Precisely there.

The feeling of slowness, clumsiness, or even of being a “fake” programmer had been slowly building up without me even being able to articulate it. It was just… there. Lurking in the background.

It wasn’t until I saw The Primeagen react to the post Why I stopped using AI code editors by Luciano Nooijen that I not only found an explanation, but actually felt seen, because I was going through the exact same phase.

I’m not going to repeat what the post says or what Prime comments (you should definitely watch it: video), but I do want to explore some of my own thoughts based on how they explain it.

BTW, Prime, if you’re reading or reacting to this post: hi!

1. What’s happening to me?

As you might’ve picked up from the intro, the integration, use, and eventual dependency on AI tools in my workflow has spread like a slow-moving poison. A daily caffeine hit that only makes you crave more and more.

From my point of view, this has been fueled by AI’s overwhelming presence in every social space: social media, daily conversations, corporate marketing, and of course, the illusion of constant improvement.

2. So… what’s the problem?

Excessive use, combined with the blind trust I’ve placed in AI, has made a large part of my workflow dependent on each exchange with ChatGPT. That led me to demand increasingly clear, concise, and “efficient” results in less and less time: straight to the point.

I’ve had moments where, if my own thoughts or searches (on Google, DuckDuckGo, Brave Search, or StartPage) didn’t yield a clear answer within a minute, my next move was: “Hey chat, how do I…”

By the way, don’t you also get the feeling that search engines are giving worse results lately? Not necessarily in terms of content, but in how the results are presented: less efficient, less direct. Maybe it’s because they’re losing ground to AI, and so they’re doubling down on ads. That could explain the flood of promoted links.

Constant exposure to those comforts has made me lazier, or at the very least, it’s shifted my focus to other stages of the workflow: the “ideation” or “structuring” part instead of actually coding or tinkering. And while I think that could eventually be a “happy ending”, the richness of the process lies in the experience.

One of the beautiful things about software is that you can imagine almost anything, and probably build it. The question is how hard it is, how many requirements it has, and what the implications are.

3. From Pen and Paper to Screen and Keyboard

I’ve heard it countless times: “This happened before when…”

When we started typing on computers, when we started using calculators, when the machine started cooking…

Comparing this to those moments feels too simplistic. You might lose some handwriting fluency, but you don’t forget how to write. You can let the machine cook, but your mother’s food will still taste a thousand times better.

You might use a calculator, but that doesn’t stop you from reasoning through how to calculate the distance d between Alice and Pedro, especially when there’s a crocodile-filled lake of distance c between them.

This situation is different, because it spans across everything. Would you go to a doctor whose answer was: “I understand… let me ask ChatGPT what your diagnosis is”?

And this brings us to…

4. The Loss of Fingerspitzengefühl

The loss of Fingerspitzengefühl, or what we could call the loss of instinct. I borrowed this term from the post I mentioned earlier.

When you know a lot about many things, you often know more than you realize. What I mean is: when you’re an expert in a field and face a problem, you might not know how to solve it immediately, but you have a subtle gut feeling about where the solution might be.

This happens all the time in software, especially when we’re debugging through thousands of log lines. Many of those traces aren’t clear at all, but somehow, they give us clues about the root of the problem.

Now, what happens when we have no idea what we’re doing, because we’ve delegated everything to the AI? Well, once again, we end up asking the AI.

We fall into a constant loop: If it’s a “known” error (known to the AI), we might get a decent answer quickly. If not, we’re left in the dark, getting bad guesses, hallucinations, or flat-out lies, just to please us.

That’s why it’s so important to…

5. Drawing the Line

AI, like any tool, has its context. And I’m not talking about the context window that agents need to process input, I mean its real context: the domain in which the tool itself makes sense.

The problem is, in this case, that context is wide and abstract. And since it appears to be capable of doing everything, that’s where we need to be the most selective.

Which isn’t easy, especially when its adoption in our daily lives is evolving faster than our ability to process, internalize, and mature our understanding of it.

Right now, as I write this post, I’m in that exact phase: trying to define the boundary between when to use it… and when not to.

It’s not easy, because it offers a lot of advantages. But the key lies in being mindful of: when to use it, what impact it has, and above all, in using it selectively.

Examples

Here are a few cases where I personally find AI genuinely useful:

  • “Convert this DDL into data structures in language X.”

  • When I’m writing in English: “Can you explain what errors I’ve made?”

  • When I want to start learning something and use AI as a search engine, staying skeptical of the results, but at least getting a basic sense of where to start.

  • To find books or references that explore a certain topic in depth, or to have a snippet of code explained to me.
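To make the first of those concrete, here’s the kind of translation I mean. The table, its columns, and the struct name are all made up for illustration; a minimal sketch of mapping a `CREATE TABLE` statement to a Rust type, kept dependency-free:

```rust
// Hypothetical DDL (invented for this example):
//   CREATE TABLE users (
//       id         BIGINT       NOT NULL PRIMARY KEY,
//       email      VARCHAR(255) NOT NULL,
//       nickname   VARCHAR(64),           -- nullable
//       created_at TIMESTAMP    NOT NULL
//   );

// One plausible Rust mapping: NOT NULL columns become plain fields,
// nullable columns become Option<T>.
#[derive(Debug, Clone, PartialEq)]
struct User {
    id: i64,                  // BIGINT -> i64
    email: String,            // VARCHAR(255) -> String
    nickname: Option<String>, // nullable VARCHAR -> Option<String>
    created_at: String,       // TIMESTAMP -> a date-time type in practice;
                              // String here to avoid extra dependencies
}

fn main() {
    let u = User {
        id: 1,
        email: "alice@example.com".into(),
        nickname: None,
        created_at: "2024-01-01T00:00:00Z".into(),
    };
    assert!(u.nickname.is_none());
    println!("{u:?}");
}
```

The value of the AI here is doing this mechanically across dozens of tables; the value of doing it yourself at least once is knowing why `nickname` became an `Option`.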

I’ve been doing this more and more. Mainly because, sadly, I feel that traditional search engine results are getting worse or more outdated over time.

Where I wouldn’t use AI:

  • “Solve X for me.”

  • “Do Y for me.”

  • “Generate Z for me.” (Unless we’re talking about images—I actually find those really fun.)

I deeply value art made by real artists. But I also can’t deny that generating “art” with AI can result in some surprisingly interesting (and hilarious) things.

6. Where AI Doesn’t Belong

Doing what I love: programming.

And yeah, it pisses me off, because that’s exactly where AI’s “efficiency” and “productivity” can shine the most. But sadly (or maybe thankfully), programming is what I’m passionate about, what I do for a living, and ultimately… it’s the root of my current problem: that feeling of clumsiness, of being a “fake” programmer.

On top of that, depending on the tool, Copilot can sometimes be absolutely unbearable, or at the very least, intrusive. And that can be incredibly annoying when you just want mental clarity while coding.

There’s already enough visual noise: lints, popups from the editor, floating library tooltips…
So the last thing I need is a suggestion popping up for every character I type, waiting for me to hit Tab and reward the AI with a job well done.

And if programming is something I (supposedly) love… why would I let someone, or something, else enjoy it for me?

Conclusion

AI isn’t just another tool. It’s not a hammer, a calculator, or a compiler. It’s a far more complex technology: one that’s silently woven itself into every phase of how we work, think, and make decisions.

That’s why it demands more than just practical use. It demands introspection.
It demands that we mature, not just technically, but personally and professionally.

We need to ask ourselves how much we depend on it. How much responsibility we’re handing over to it, responsibility for tasks that used to form us, train us, and shape our judgment.

Because yes, it’s useful. Incredibly useful. But in that usefulness, there’s a trap: The trap of immediacy. The trap of borrowed knowledge. The trap of productive self-deception. The trap of believing we know something… just because we know how to ask for it.

And still, we know this is just the beginning. AI will keep improving, becoming more precise, more capable, more convincing. There will come a day when we won’t write prompts or review code anymore, just state an idea and watch it come to life. And yes, maybe we’ll stop “working” in the traditional sense altogether.

But let’s not forget: Right now, at its core, AI is still nothing more than a giant database with a web of complex relationships, one that responds using a natural language model to help us synthesize and make sense of information. To make data appear to be knowledge.

Apple made a similar point (even if it also sounds like marketing) in their paper The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity.

AI won’t stop. Neither should we. But maybe, just maybe, it’s time to ask not only what AI can do for us, but also what we should keep doing for ourselves.