Pros and Cons: Experiments with AI-Driven Development and Claude Code

I’ve been experimenting with how I use AI in my development workflow for the last few months. I’ve been using Cursor for a while now, but I finally decided to jump on the Claude Code hype train after hearing about the powerful things people on my team were doing with it. I’ve been resistant to AI in the past, but I went into Claude with an open mind and decided to try out AI-driven development for the first time.

My Experience with Cursor

My first experience with AI-assisted development was Cursor. I used it for smaller tasks such as writing tests and helping decide how to refactor things. Even for tasks that simple, I constantly fought with it to get it to write code the way I wanted. It would suggest patterns that didn’t match our codebase (or simply weren’t logical at all) or take an approach I’d have to undo and redo. It was frustrating and wasted more time than it saved.

I kept at it for a while, but the experience left me skeptical about how much AI tooling could actually help in a real development workflow.

Trying Claude Code End-to-End

Eventually, I downloaded Claude Code and decided to try something more ambitious. Instead of small tasks, I wanted to see if it could help me build an entire feature end-to-end.

Starting with a Plan

I spent time digging through my codebase with Cursor (before I installed Claude) and wrote up a detailed technical document that I felt pretty confident about. Then I gave that plan to Claude Code and asked it to break the work down into reasonable, bite-sized chunks, pausing between each one to let me review.

The results surprised me. Claude broke the implementation into logical pieces and wrote code that matched the patterns already in the codebase. Besides minor code style preferences, the code it produced was basically what I would have written myself. When I pointed it toward specific services whose style I preferred, it picked up on those patterns quickly.

Pushing Back Instead of Saying Yes

At one point, I suggested a different approach for part of the implementation. When prompting AI with an idea I have but am not 100% sure of, I always add something like, “What do you think of this approach?” or, “Do you think there is a better way to do this?” Cursor would usually ignore that question and go along with whatever I suggested without a second thought. Claude actually pushed back and explained why its approach was better. The reasoning was detailed, made good points in favor of its own approach, and felt more like working with a seasoned teammate than an annoying yes-man.

The feature was semi-complex, and the code worked without issues. Much of the ease I had with this experiment came from starting with a detailed implementation plan (down to which functions I wanted to call, which may have been overkill). Even so, I never had to argue with it back and forth the way I did with Cursor. It just wrote good, clean code, exactly how I would have done it myself.

The Part I Didn’t Expect

Here’s the thing I wasn’t prepared for: it was kind of boring.

Most of my time went to clicking through and approving code. Occasionally, I’d give input on style, but since the output was mostly right, I didn’t have much to say. I was reviewing instead of building, and the distinction matters more than I thought it would.

When the story was done, I didn’t feel any sense of accomplishment. Even on simple tasks, I normally enjoy the process of writing code and solving problems. That was gone. I was faster, sure, but faster at what? Approving someone else’s work?

What’s Next

I think there’s a better workflow out there, but I haven’t found it yet. Maybe the move is to split up the work — hand off the parts I don’t want to implement and write the interesting pieces myself. Or maybe I write the code and use Claude to review it. I’m not sure.

What I do know is that sitting there reviewing Claude’s output felt like a waste of time, even though it was objectively saving me time. That’s a strange tension, and a lot of developers are going to run into it as these tools get better.

I’m still figuring this out. But if you’re using AI tools and something feels off about the workflow, you’re probably not alone.

 