AI is Making You A Shallow Developer
Key Takeaways
AI is a tool for the informed: It only works if you know what you’re trying to do and how to ask for it.
Beware of “Cognitive Debt”: Relying on AI offloads the mental effort, which weakens your neural engagement and makes it harder to remember or explain your own code.
The Expertise Barrier: You need a solid foundation to filter out AI’s mistakes; otherwise, it’s just a time-sink.
Stay in the driver’s seat: To stay sharp, you must remain a creator, not just an observer. Limit AI to the repetitive “grunt work” and keep the core logic for your own brain.
If you’re not creating, you’re not really learning—and you’re not really living the craft. You’re essentially daydreaming.
The experience of coding with AI is hard to describe, but if I had to use one word, it’s shallow. By contrast, when you are actually writing code, you are fully engaged. You’re constantly learning from failed tests, type-check errors, and integration headaches. When you rely too much on AI, you become detached from that process. Even though you’re still sitting at your desk looking at the monitor, you aren’t really in the work anymore.
The Loss of Joy
I’ve started to feel like AI is taking away the best parts of programming. I’m talking about the interesting work: troubleshooting a production incident like a detective, verifying a quick idea with a console.log, or tweaking CSS box shadows with live reload to get a mockup just right. That fun is disappearing.
Ironically, on average, I’m more “productive” than ever. I can easily write a thousand lines of code, delete them, and write another thousand in a single morning. But I never step into “the zone” anymore. I don’t leave my desk feeling energized or proud of what I built.
AI might not replace you as a developer directly, but it might steal the joy that made you want to be a developer in the first place. And once the work becomes boring, you’ll eventually want to leave the industry altogether.
The Illusion of Expertise
At first glance, LLMs tuned for coding are fun. They are fast, they adapt to your codebase, and they mimic your patterns perfectly. The AI acts like a domain expert, generating code 100 times faster than any human colleague. It turns mockups into components that follow your ESLint rules, writes the tests, and sets up the Storybook stories. You just provide feedback and approve the PR.
On the surface, it looks right. But after a few weeks, the patterns you’ve known for years start to feel alien. You look at the code and don’t fully understand why a function exists or why a check is done a certain way.
When you ask the AI for help, the original reasoning is often gone from its “short-term memory” (the session). The logic becomes “magic”—like a contractor who wrote thousands of lines of code and quit without a handoff. Except now, that happens every single minute.
The Accumulation of “Cognitive Debt”
I call this shallow work, and it turns out there is actual science behind this feeling. A recent study from the MIT Media Lab examined how using AI assistants leads to what they call “Cognitive Debt.” Using EEG brain scans, researchers found that people using AI assistants showed significantly weaker neural connectivity in areas tied to deep thinking and memory. Most shockingly, 83% of AI users were unable to quote from the work they had just produced. They had the “output,” but they didn’t have the understanding.
This is exactly what happens in a coding session. You are only observing the surface, which means mistakes easily slip into production. If the AI writes the code and the test, it might pass the test by simply walking around the fundamental issue. If you aren’t hands-on, your brain effectively “offloads” the logic, and you lose your grip on the project.
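To make the "passing the test by walking around the fundamental issue" failure mode concrete, here is a hypothetical sketch (the function, test names, and the discount scenario are invented for illustration): an AI-generated function and its AI-generated test share the same wrong assumption, so the suite stays green while the bug ships.

```typescript
// Hypothetical scenario: the requirement is that discounts stack
// multiplicatively (10% then 20% => 28% off, not 30%).

// What the AI might write: it sums the percentages instead of compounding.
function applyDiscounts(price: number, discounts: number[]): number {
  const total = discounts.reduce((sum, d) => sum + d, 0); // bug: additive
  return price * (1 - total);
}

// The AI-generated test only exercises a single discount, where the
// additive and multiplicative versions happen to agree, so it passes.
function testSingleDiscount(): boolean {
  return applyDiscounts(100, [0.1]) === 90;
}

// A hands-on reviewer who understands the domain adds the case that
// exposes it: 100 * 0.9 * 0.8 = 72, but the additive version returns 70.
function testStackedDiscounts(): boolean {
  return applyDiscounts(100, [0.1, 0.2]) === 72;
}

console.log(testSingleDiscount());   // true: the surface looks fine
console.log(testStackedDiscounts()); // false: the bug the AI's test missed
```

The point is not this particular bug; it is that when the same blind spot produces both the code and the test, only a reviewer who actually holds the logic in their head will notice the gap.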
The Learning Trap
Real learning isn’t watching someone do something right. It’s the opposite: it’s you trying things that don’t work until you find the way that does. With AI, you just ask it to fix the “shitty things” until you see a good result. You tell yourself, “I’m satisfied; it probably wrote what I would have written.”
But that’s a lie. You learned nothing. You’ve just become an agent that translates business requirements into prompts. It raises the question: Are you still a programmer? Or just a translator?