Here’s What I’m Learning from a New Policy on AI Code Review

My client team recently updated its code review policy: an approval from an AI tool is now all that’s needed to merge. In this post, I’ll reflect on this change and what it means for the way our team works together.

My Personal Relationship with Code Review

Reviewing PRs is one of my very favorite parts of my job. It’s a great way to learn more about less familiar parts of the system and to get introduced to new patterns. PR comments are an opportunity to practice articulating my own technical preferences and to learn about others’ perspectives. I also enjoy getting to hold a lot of moving pieces in my head and think about how they all work together.

All this to say, I feel a bit of sadness about losing this part of my work, and even more apprehension that less code review will mean less learning for myself and across the entire team.

How the Team Reacted

The announcement of this change was met with a range of opinions. On the more optimistic end, there’s excitement about the velocity gains it will enable. If an AI tool can remove the bottleneck of waiting for a team member to review PRs, developers can ship changes faster than before.

On the other hand, there are a lot of concerns about code quality, and fear that we’re inviting AI slop into our codebase unchecked. Additionally, code review is where requirements get validated, so there are concerns about requirements slipping through the cracks if team members no longer have to check out a branch and run the code to validate the changes.

The Norms Our Team Agreed On

Rather than treating this as an all-or-nothing change, our team quickly aligned on a few norms to guide how we’ll think about code review going forward.

Team members should continue to be notified of all PRs. Even if we’re not reviewing the actual code, it’s important that we have visibility into the kinds of changes being merged. We should at least give each other the opportunity to review code changes that may be of interest.

Model and API changes always get a human review. These are the most consequential changes, and the ones that are hardest to change course on if/when we get them wrong. They deserve an intentional review from the team.
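One way a norm like this could be enforced mechanically, rather than relying on memory, is with a code owners file. This is only a sketch: it assumes the repository is on GitHub with branch protection’s “Require review from Code Owners” setting enabled, and the paths and team names below are hypothetical placeholders, not our actual structure.

```
# Hypothetical CODEOWNERS file (assumes GitHub branch protection with
# "Require review from Code Owners" enabled on the default branch).
# PRs touching these paths would automatically request, and require,
# approval from a human reviewer before merging.
/app/models/   @our-org/model-reviewers
/app/api/      @our-org/api-reviewers
```

Paths not listed here would fall through to the default policy, so AI-only approval would still apply to everything else.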

For non-trivial logical changes, it’s up to the author. Not every PR carries the same level of risk. For changes that are more about application logic, we’ll leave it up to the author to decide whether they want to wait for a human review. The assumption is that changes of this nature will be easier to course-correct on if something slips through.

Investment in automated test coverage is needed. We have some gaps in our automated test coverage, and our end-to-end test suite has been too flaky to truly rely on. This concern was already worth addressing. Now with human reviews being optional, the safety net that automated tests provide becomes even more important.

Too Early to Tell

It’s worth noting that this policy is less than a week old. While I personally find myself in the camp that’s more concerned than excited, I’m trying to keep an open mind.
