My AI Coding Journey: A Love-Hate-Love-Hate-Wait-What Relationship
Look, I need to be honest with you. My relationship with AI coding tools is complicated. Some days I feel like a 10x developer cranking out features at lightning speed. Other days I'm debugging hallucinated functions that never existed, wondering why I didn't just write the damn code myself.
Sound familiar? Good. Because if you're using AI for coding, you're probably living this same rollercoaster.
The Honeymoon Phase Was Real
That first time AI wrote a perfect function for me? Chef's kiss. It was like having a senior developer on call 24/7 who never gets tired or annoyed at my questions. I could scaffold entire apps in minutes, generate boilerplate without wanting to cry, and explore different approaches without judgment.
For small projects, POCs, and MVPs? AI is genuinely fantastic. Most developers I talk to say they're saving hours every week with these tools. Some claim they're getting a full workday back.
But here's where things get interesting (read: frustrating).
The "Almost Right" Problem
The single biggest issue with AI coding? Nearly everyone I talk to complains about AI solutions that are "almost right, but not quite." The code looks perfect. It runs. It has beautiful variable names and helpful comments. And it's subtly, confidently wrong.
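To show what I mean, here's a made-up but representative example of the pattern: clean names, a helpful docstring, code that runs fine, and a bug that only bites at month boundaries.

```python
from datetime import date, timedelta

def last_day_of_month(d: date) -> date:
    """Return the last day of the month containing d."""
    # Looks plausible -- but a month isn't always 30 days. For
    # d = date(2024, 2, 10) this returns March 2, not February 29.
    return d.replace(day=1) + timedelta(days=30)

# A correct version anchors on the first day of the *next* month:
def last_day_of_month_fixed(d: date) -> date:
    next_month = d.replace(day=28) + timedelta(days=4)  # always lands in next month
    return next_month - timedelta(days=next_month.day)
```

The sneaky part is that the buggy version is actually correct in any 31-day month, so a quick spot check in January tells you everything is fine.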
I've spent more time debugging AI's overconfident mistakes than I would've spent just writing the code myself. And don't even get me started on hallucinations. AI sometimes just makes up nonexistent functions, invents libraries that sound totally plausible, and generates code that references documentation that doesn't exist.
The scary part? Some folks have pointed out that attackers could exploit these repeated hallucinations by uploading malicious packages with the same names AI keeps inventing. So not only is AI making stuff up—it's potentially creating security vulnerabilities.
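One cheap guardrail: before installing anything an assistant suggests, check that the package actually exists in the registry. A minimal sketch against PyPI's public JSON API (the package name below is deliberately made up):

```python
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Return True if a package name is registered on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # A 404 here means nobody has published the package: a strong
        # hint the assistant invented it.
        return False

# "fastjsonx" is a hypothetical name an assistant might confidently suggest.
if not exists_on_pypi("fastjsonx"):
    print("Not on PyPI; likely a hallucination. Don't pip install blindly.")
```

And note the inverse isn't reassuring either: if a frequently hallucinated name *does* exist, it may be exactly the squatted package the attack relies on. Treat unfamiliar dependencies with suspicion in both directions.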
Production Code? Let's Talk About That
Here's the uncomfortable truth: AI is great for scaffolding but struggles with production-ready code. I've seen experienced developers on large codebases actually take longer when using AI tools. Why? Because complex business logic, edge cases, and deep domain knowledge don't fit neatly into the training data.
And then there's the team problem. When AI writes your code, nobody really understands what it did or why. Code reviews become archaeological expeditions. "Why did you do it this way?" "I... don't know? Claude suggested it?"
For large, existing codebases, the results are even more hit-or-miss. That aging React Native monolith with your company's weird architectural decisions from 2017? AI has no idea what's going on there.
From Chatting to Context Engineering
My prompts have evolved dramatically:
Week 1: "make a login page"
Month 3: "create a login page with email validation and..."
Month 6: [Provides three pages of architectural context, coding standards, examples, edge cases, API specs, security requirements...]
This shift has a name now: context engineering. People are starting to talk about it as the evolution of prompt engineering, where you architect the full context including instructions, memory, and knowledge retrieval. I've literally spent days writing the perfect prompt to get an agent working properly.
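In practice, "architecting the context" just means assembling those pieces programmatically instead of pasting them in by hand. A minimal sketch of the idea (the section names and sample inputs are mine, not any particular framework's):

```python
def build_context(task: str, standards: str, examples: list[str], docs: list[str]) -> str:
    """Assemble instructions, coding standards, examples, and retrieved
    knowledge into one context block: the "Month 6" prompt, automated."""
    sections = [
        ("Task", task),
        ("Coding standards", standards),
        ("Reference examples", "\n---\n".join(examples)),
        ("Retrieved documentation", "\n---\n".join(docs)),
    ]
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections)

prompt = build_context(
    task="Create a login page with email validation and rate limiting.",
    standards="TypeScript strict mode; no inline styles; errors go to the logger.",
    examples=["<an existing form component pasted from the repo>"],
    docs=["<the relevant section of the auth API spec>"],
)
print(prompt)
```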
And here's the wild part: when these models go off and search hundreds of sources on their own, your carefully crafted prompt becomes just a tiny fraction of what they're actually processing.
New tools like MCP (Model Context Protocol) and Skills are genuinely helpful. But they change constantly. The learning curve feels like climbing a sand dune—just when you master something, the landscape shifts.
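To make MCP a little less abstract: it's a standard way to expose your own tools and data to a model. Here's a minimal tool server using the official Python SDK (`pip install mcp`); the `count_todos` tool is a toy of my own invention, not part of the protocol:

```python
from mcp.server.fastmcp import FastMCP

# A named server that a coding assistant can connect to and discover tools on.
mcp = FastMCP("repo-helpers")

@mcp.tool()
def count_todos(text: str) -> int:
    """Count TODO markers in a block of source text."""
    return text.count("TODO")

if __name__ == "__main__":
    # Runs over stdio by default, which is how most clients launch servers.
    mcp.run()
```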
The Gradient of Improvement
Despite the frustrations, there's good news: most developers I talk to say AI genuinely has enhanced their productivity and even improved their code quality over time.
The key insight? This is a learning curve for both you and the tools. Each week I get slightly better at knowing what to ask for versus what to code myself. The circular debugging loops get shorter. The hallucinations become easier to spot.
But here's what I've learned you absolutely cannot skip:
You still need to know how to code. Our job isn't to type code into a computer—it's to deliver systems that solve problems. AI doesn't replace understanding—it amplifies it. You can't validate what you don't understand.
Learn Principles, Not Tools
Here's my advice after living through this: learn the principles, not the specific tools.
Pretty much every developer I know is using AI tools now in some capacity. But the tools change monthly. Yesterday's cutting-edge assistant is tomorrow's "remember when?" Focus on understanding prompting strategies, context management, and decomposition techniques.
And critically, keep doing pull requests and code reviews just as rigorously as if humans wrote the code. Use good linting tools and security scanners throughout the development cycle. Your validation muscle must stay strong.
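Concretely, the gate can be a dumb script that runs in CI on every change, indifferent to who (or what) wrote the diff. Here I'm assuming ruff for linting and bandit for security scanning; swap in whatever your stack uses:

```python
import subprocess
import sys

# Checks run against every change, AI-authored or not.
CHECKS = [
    ["ruff", "check", "."],   # linting and style
    ["bandit", "-r", "src"],  # common Python security issues
]

failed = False
for cmd in CHECKS:
    if subprocess.run(cmd).returncode != 0:
        print(f"FAILED: {' '.join(cmd)}", file=sys.stderr)
        failed = True

sys.exit(1 if failed else 0)
```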
Interestingly, most developers I talk to still don't trust AI for the high-stakes stuff like deployment, monitoring, and project planning. We're all figuring out where AI fits and where human judgment is non-negotiable.
The Verdict?
My relationship with AI coding is messy, complicated, and absolutely worth it. It's legitimately great for:
- Scaffolding and boilerplate
- Learning and exploration
- Rapid prototyping
- Repetitive tasks
But it still needs humans for:
- Architecture decisions
- Code validation
- Complex business logic
- Understanding why something works
The tools are improving. More importantly, I'm improving at using them. The trajectory is upward—just not in a straight line.
From what I'm seeing, teams that embrace AI thoughtfully are shipping faster and more efficiently. We're getting there. Together. One hallucinated function at a time.
And yes, this article was written with AI's help. The thoughts are all mine, but it certainly helped with the words! I'm a techie, not a writer!