Last week I delivered a webinar entitled, “How to Implement Code Reviews that Don’t Suck.” I focused my presentation on how to overcome common challenges to getting a successful code review process in place. This is an important topic, because as clear as the benefits are, there are often social, organizational, and educational barriers to getting started, being productive, and achieving and measuring success.
If you are interested, you can view the on-demand recording of the webinar at the link above. The focus of this blog post is to answer some of the questions we received during the session but didn’t have time to address during the live event.
Question: How do you encourage reviewers to submit a defect vs. a comment? We are finding that we get lots of comments from reviewers, but not so many defects, when in fact some of the comments should be defects. Is this a common phenomenon when bringing code review to organizations?
John: Yes, we’ve found a high sensitivity to submitting defects to be very common. On the other hand, we see some organizations where teams submit more defects than comments. While we have never been able to articulate a specific cause for the difference, it’s clearly a function of organizational culture and of the backgrounds and temperaments of the individual team members.
So, what to do about it? As a software company we aren’t in a position to directly change the attitudes and personal relationships that give rise to this. However, we have added some features to CodeCollaborator that can help make defect submission feel a little less ‘in your face.’
First, we recognize that terminology matters. So, we have made the defect terminology configurable. Our users can give our defects a friendlier name like “issue” or “finding” and reserve harsher designations for the defect tracking system. So, in your own process you might want to give some thought to what you choose to call these things.
Second, some team members feel more comfortable submitting a comment before submitting a defect, because they feel it’s easier to create the defect once there is some discussion and consensus that the issue needs to be fixed.
This is why every comment thread in CodeCollaborator can be used to create a defect. We are also considering a feature that allows the promotion of a comment to a defect.
Question: In most of the teams I have worked with, there is an imbalance of knowledge between various members. For example, one developer may be more familiar with the database than other engineers. If there is an imbalance of expertise in the team, how can code review be implemented effectively?
John: Although distributing knowledge of the code base is one long term benefit of code review, it’s a valid question about how to get started. Here are a few practices that can help.
1. Have the engineers who wrote the code annotate it prior to sending it out for review.
2. Include the engineers who wrote the code as observers on the review.
3. Include engineers not familiar with the code as observers for a short period.
Annotation helps review authors inform other team members about aspects of the code that require supplementary explanation. The observer role allows you to include new team members and others unfamiliar with the code in the review process without making them responsible for helping complete it. In CodeCollaborator, the Observer is a built-in role designed specifically for this purpose.
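To make the annotation practice concrete, here is a minimal, hypothetical sketch (not tied to CodeCollaborator or any particular tool): the author’s annotation, written as a comment, explains a non-obvious decision up front so reviewers don’t spend the review flagging it as a defect.

```python
# Hypothetical example: the author's pre-review annotation (the comment
# inside the function) documents a deliberate choice for reviewers.

def retry_delays(attempts):
    """Return exponential-backoff delays, in seconds, for each retry."""
    # Annotation for reviewers: the 30-second cap is intentional -- our
    # upstream load balancer drops idle connections after 60 seconds, so
    # longer waits would just reconnect anyway. Not a bug.
    return [min(2 ** n, 30) for n in range(attempts)]

print(retry_delays(6))  # [1, 2, 4, 8, 16, 30]
```

The same explanation could just as easily live in the review tool’s annotation field rather than in the source; the point is that the author states the rationale before reviewers encounter the surprise.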
It’s also key to select reviewers carefully. If you have areas of code that can only be reviewed by selected individuals, an automated review tool will allow you to set up reviewer subscriptions and notifications.
Question: Is a checklist recommended for the author, the reviewer, or both? Does CodeCollaborator have a facility to create one?
John: Checklists are typically designed for reviewers. Their primary intent is to help reviewers find issues that aren’t obvious in the code. Omissions and conformance to standards are types of issues that the code itself doesn’t disclose. That said, the checklists themselves can be written by reviewers, authors, or the collective team.
Depending on how a team wants to use them, checklists can be created in a number of different ways. Participant custom fields are one example. These are fields filled out per review by each participant. Selecting the items covered during the review would be one simple way to put a checklist on the review summary. Another, more informal, method would be to add checklist information to the general comments for the review.
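As a rough illustration of the per-participant checklist idea, here is a hypothetical sketch (the checklist items and function names are invented for this example, not part of any tool’s API): the checklist is kept as plain data, and each participant’s covered items are tallied into a simple summary.

```python
# Hypothetical sketch: a reviewer checklist kept as plain data, with a
# per-participant tally of which items were covered during the review.

CHECKLIST = [
    "Error paths handled",
    "Input validation present",
    "Naming follows team standard",
    "Unit tests updated",
]

def coverage_summary(covered_items):
    """Return (done, total, missing) for one participant's checklist."""
    missing = [item for item in CHECKLIST if item not in covered_items]
    return len(CHECKLIST) - len(missing), len(CHECKLIST), missing

done, total, missing = coverage_summary(
    {"Error paths handled", "Unit tests updated"}
)
print(f"{done}/{total} items covered; still unchecked: {missing}")
```

Notice that the checklist items target exactly the kinds of issues the code itself doesn’t disclose, such as omissions and conformance to standards.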