Can static code analysis replace peer code review?

Whenever I talk about peer code review, someone always wants to pit static code analysis tools against human review.  Which is better?  They assume that because we sell a peer review tool, we must hate automation.

But that's just not true.  You need both.

Want to simultaneously waste developers' time and give them trivial busywork?  Have them hunt around for local variables that aren't all lowercase and find instances where you've overridden equals() but not hashCode().
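
Here's a concrete sketch of that kind of defect (the Point class below is invented for illustration).  Any decent static analyzer flags it instantly; no human needs to hunt for it:

    import java.util.HashMap;
    import java.util.Map;

    // Point overrides equals() but not hashCode() -- the classic
    // contract violation that static analysis catches automatically.
    final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof Point)) return false;
            Point p = (Point) o;
            return x == p.x && y == p.y;
        }
        // Missing hashCode(): equal Points inherit Object.hashCode(),
        // so they almost certainly land in different hash buckets.
    }

    public class Demo {
        public static void main(String[] args) {
            Map<Point, String> labels = new HashMap<>();
            labels.put(new Point(1, 2), "home");
            // Almost certainly prints null: the lookup key hashes
            // differently from the stored key, even though they're equal.
            System.out.println(labels.get(new Point(1, 2)));
        }
    }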

Anything you can automate… should be automated!  Of course.

But there are certain questions that static analysis can never answer.  Like:

  1. Does the code work as documented?
  2. Are the unit tests correct?  (See the sketch after this list.)
  3. Can another developer understand this code well enough to use and maintain it?
  4. Is this a good algorithm?
  5. Is this good code organization?
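
To see why item 2 in particular needs a human, consider a minimal sketch (DiscountTest and discountedTotal are invented for illustration, assuming JUnit 4).  The test compiles, runs green, and passes every static check, yet it verifies the bug rather than the spec:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Suppose the spec says: orders OVER $100 get a 10% discount.
    public class DiscountTest {
        // Buggy implementation: >= applies the discount at exactly $100 too.
        static double discountedTotal(double total) {
            return total >= 100 ? total * 0.90 : total;
        }

        @Test
        public void boundaryCase() {
            // This test passes -- but it encodes the bug, not the spec.
            // Only a reviewer who knows the requirement will object.
            assertEquals(90.0, discountedTotal(100.0), 0.001);
        }
    }

A static analyzer has no idea what the spec says.  A reviewer does.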

Items 1-3 are especially important.  Really important, if you care about code quality.  The only way to answer these questions is through peer code review.

Static analysis is like the spell-checker in Word.  Of course you should use it, and of course you should clear spelling errors before handing your document to a friend to edit.  But it's that human edit that finds the problems, checks for correctness, and can identify the sentences that are "weird."

Spell Czech is good, butt knot enough.  It takes a human to find the important problems.

So definitely run that static code analysis!  Just don't expect it to magically make your code correct or maintainable.
