How peer code review fixes the “false-positive” problem with static analysis tools

If you're starting a new project, do yourself a favor and get a good static analysis tool into your standard build system.  It will prevent all sorts of little bugs and help the code base stay sane.
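Here's a minimal sketch of what that build-system hook can look like for a brand-new project, where the warning count starts at zero and you simply keep it there.  The linter command name (`mylint`) is a placeholder, not a real tool; swap in whatever analyzer you actually use.

```python
# Minimal sketch of a zero-tolerance build gate for a new project.
# Assumes a hypothetical linter called "mylint" that prints one finding
# per line and exits nonzero when it finds anything.
import subprocess
import sys

def run_static_analysis(paths):
    """Run the linter and fail the build on any finding at all."""
    result = subprocess.run(
        ["mylint", *paths],          # hypothetical tool name
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(result.stdout)
        print("Static analysis found issues -- failing the build.",
              file=sys.stderr)
        sys.exit(1)

if __name__ == "__main__":
    run_static_analysis(sys.argv[1:] or ["src/"])
```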

But if you've installed a static analysis tool on an existing code base, it's a different story.  Suddenly you're presented with (tens of?) thousands of errors, and on closer inspection you realize that a lot of them are… false positives!

What do you do with that?  Surely you can't pause development for months while you clean this up.  Surely it's not worth changing thousands of lines of code just to eliminate false-positive results from some tool!

Some tools have an answer for this.  For example, you can benchmark the number of errors you have today and then enforce that ceiling going forward: you add no new errors, and ideally the count shrinks as fixes work their way in.
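If your tool doesn't do this for you, the ratchet is easy to script yourself.  Here's a sketch under the same assumptions as above (a hypothetical `mylint` command, one warning per output line); the baseline file name is arbitrary.

```python
# Sketch of the "ratchet" approach: record today's warning count as a
# baseline, fail the build only if the count grows, and lower the
# baseline whenever the team manages to fix some warnings.
import subprocess
import sys
from pathlib import Path

BASELINE_FILE = Path("lint_baseline.txt")   # arbitrary location

def count_warnings(paths):
    """Run the hypothetical linter and count one warning per output line."""
    result = subprocess.run(["mylint", *paths], capture_output=True, text=True)
    return len([line for line in result.stdout.splitlines() if line.strip()])

def check_ratchet(paths):
    current = count_warnings(paths)
    if not BASELINE_FILE.exists():
        BASELINE_FILE.write_text(str(current))   # first run: record baseline
        print(f"Recorded baseline of {current} warnings.")
        return
    baseline = int(BASELINE_FILE.read_text().strip())
    if current > baseline:
        print(f"Warning count grew from {baseline} to {current}.", file=sys.stderr)
        sys.exit(1)
    if current < baseline:
        # Someone fixed a few: tighten the ratchet so the gains stick.
        BASELINE_FILE.write_text(str(current))
        print(f"Baseline lowered from {baseline} to {current}.")

if __name__ == "__main__":
    check_ratchet(sys.argv[1:] or ["src/"])
```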

That's not bad.  Here's another way.

Since only a human can decide which (alleged) errors are (a) true and (b) worth fixing, and since peer code review complements static analysis, why not combine them?

Specifically: take the output of static analysis as an input to reviewing code.  So "errors" become notes that inform a human, who can then decide how strict the enforcement should be.

With a code review tool like Code Collaborator, you can write a script that takes the output of a static analysis tool and puts the warnings into the code review on the right lines of the right files, right where the reviewer is already looking.
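The glue script is mostly parsing.  Here's a sketch that turns a generic "file:line: message" lint format into per-file, per-line review notes.  The regex is an assumption about your analyzer's output, and the upload step depends entirely on your review tool's own API (Code Collaborator included), so this version just prints the notes; the grouping by file and line is the part that carries over.

```python
# Sketch: convert static analysis output into per-line review comments.
# Reads the analyzer's output on stdin, assumed to look like
#   path/to/file.c:123: some warning text
# The "upload" step is a stand-in -- replace it with your review tool's API.
import re
import sys
from collections import defaultdict

WARNING_RE = re.compile(r"^(?P<file>[^:]+):(?P<line>\d+):\s*(?P<message>.+)$")

def parse_warnings(analysis_output):
    """Group warnings as {file: [(line, message), ...]}."""
    warnings = defaultdict(list)
    for raw in analysis_output.splitlines():
        match = WARNING_RE.match(raw.strip())
        if match:
            warnings[match["file"]].append((int(match["line"]), match["message"]))
    return warnings

def attach_to_review(warnings):
    """Stand-in for the upload: print each note where a reviewer would see it."""
    for path, notes in sorted(warnings.items()):
        for line, message in sorted(notes):
            print(f"{path}, line {line}: [static analysis] {message}")

if __name__ == "__main__":
    attach_to_review(parse_warnings(sys.stdin.read()))
```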

Easy!  So now you can handle a bunch of warnings in a sensible way.
