We humans think a lot about how we’re different from other species on this planet. Defining these differences strokes our ego because we’re brought up to bask in the glow of our “superiority” to everything else on Earth. One of these differences – we believe – involves complexity. The life of a modern human being certainly seems to be quite complex. And, feeling superior, we humans pride ourselves on being able to effectively deal with all that complexity.
Where does complexity come from?
Complexity, at its core, is about decisions – choices that we face and decisions that we need to make. We make thousands of decisions every day, frequently without even realizing it. Sometimes we get excited about all that complexity.
“You can’t believe how many balls I juggle during the day!” we proudly proclaim. Somehow that makes us – well, men anyway – feel like tough guys. All that activity keeps us busy and intellectually entertained. So much so that when we finally wind down – say, on vacation in a quiet setting – we somehow feel like less of a person. There’s so much less to decide (assuming you’ve planned the vacation well). It’s as if someone popped our overinflated balloon of life.
Yet, frequently, that down time provides us with some of our best moments. The pervasive noise of life subsides and we discover unexpected insight into things we hadn’t had time to reflect on until then. We were too busy making all those decisions.
So what does that have to do with code?
Code is the prose of software engineers. Engineers deal with complex systems on a daily basis – designing them, making them, repairing them, or explaining them. And all those systems are built with code.
Since the processes and algorithms that these software systems model are quite complex, the underlying code that implements them tends to be complex as well. In general, that’s not a good thing: complex code is harder to understand, and therefore more prone to bugs – particularly when someone other than the original author makes changes, which is common in the software world. While bug likelihood doesn’t grow strictly linearly with code complexity, it is generally accepted that functions that are too long and/or too complex are red flags.
Usually, software engineers with any experience can look at code and intuitively gauge its level of complexity. Whether they can effectively deal with that complexity depends on their level of skill. As you can imagine, intuition isn’t good enough for managers – they need measurement. Fortunately, there is a precise way to measure code complexity, known as cyclomatic complexity (often called McCabe complexity, after its inventor). As you might expect, the 1976 paper that introduced it was a little – shall we say – complex, particularly in the way it outlined the calculation.
As the concept grew in popularity, a simpler calculation method emerged. All you need to know is the number of decisions built into your code: the complexity is that count plus one. Sound familiar? Just as in our daily lives, the more decisions in a piece of code, the more complex it is. Decisions in code take the form of if conditionals, loops of all sorts, and exception handlers.
It’s not surprising that “straight line” code (code in which instructions are executed sequentially, without branching, looping or testing) has a complexity of “1.” That’s as low as it gets. Minimum decisions, minimum complexity. Logically then, as the number of decisions in a function grows, the complexity grows as well.
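To make the counting concrete, here is a small hypothetical Python function (the function and its name are illustrative, not from any particular code base) with its decision points tallied by hand:

```python
def classify(values):
    """Classify numbers; each decision point adds 1 to the complexity count."""
    result = []
    for v in values:          # decision 1: the loop
        if v < 0:             # decision 2: the if
            result.append("negative")
        elif v == 0:          # decision 3: the elif
            result.append("zero")
        else:                 # else adds no new decision
            result.append("positive")
    return result

# Straight-line code starts at 1; three decision points
# give this function a cyclomatic complexity of 1 + 3 = 4.
print(classify([-1, 0, 2]))  # prints ['negative', 'zero', 'positive']
```

Note that the `else` branch adds nothing: it is the other side of a decision already counted, not a new one.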
Using code complexity metrics to improve code quality
Complexity can be exciting, but life is generally better when it’s less complex. The same can be said for code. The best code is not the most complex (even if complexity strokes the engineer’s ego). The best code keeps complexity low – and writing simple code that does complex things is an art. Less complex code is easier to understand and far more likely to be of high quality.
Unlike in life, where we can get away to escape complexity, code never takes a vacation. The only way to avoid complexity there is to write the code properly from the start. Refactoring and new implementation work benefit from complexity statistics as well: a general downtrend in complexity indicates improving code quality. That’s where tools come in – they help us evaluate complexity and reduce it wherever possible.
When you measure an existing code base for complexity, some high-complexity areas will immediately stand out. These are the potentially problematic parts of your code – the places that call for more intense code review, focused testing, and/or refactoring. Some of them may have been collecting dust for years, just waiting for the right moment to blow up.
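The decision-counting rule is simple enough to automate. Here is a minimal sketch using Python’s standard `ast` module – a deliberately simplified version of what real complexity tools do (production tools also count boolean operators, comprehensions, and other constructs, so treat this as an illustration, not a measurement tool):

```python
import ast

def cyclomatic_complexity(source):
    """Rough cyclomatic complexity: 1 plus the number of decision points.

    Simplified sketch: counts only if statements, loops, and exception
    handlers. Real tools count additional constructs.
    """
    decision_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler)
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, decision_nodes) for node in ast.walk(tree))

sample = """
def parse(line):
    try:
        n = int(line)
    except ValueError:
        return None
    if n < 0:
        return -n
    return n
"""
print(cyclomatic_complexity(sample))  # prints 3: 1 + except handler + if
```

Running a counter like this over every function in a code base is exactly how the high-complexity hot spots are made to stand out.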
As a matter of course, it’s good practice to establish an acceptable complexity threshold. Functions that exceed it should be broken down before they are committed to the team repository. A threshold of 20-25 is typical in industry, though 10 is a better limit for life-critical systems.
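Breaking a function down usually means moving clusters of related decisions into helpers, so that no single function carries them all. A hypothetical sketch (the shipping-cost scenario and all names here are invented for illustration):

```python
# Before: one function carries all four decisions (complexity 1 + 4 = 5).
def shipping_cost_before(weight, express, international):
    if weight <= 0:                          # decision 1
        raise ValueError("weight must be positive")
    if international:                        # decision 2
        cost = 20.0
    else:
        cost = 5.0
    if express:                              # decision 3
        cost *= 2
    if weight > 10:                          # decision 4
        cost += 7.5
    return cost

# After: the decisions are spread across helpers, so each function
# stays small (complexities 2, 3, and 2 respectively).
def base_rate(international):
    return 20.0 if international else 5.0    # one decision -> complexity 2

def surcharges(cost, express, weight):
    if express:                              # decision 1
        cost *= 2
    if weight > 10:                          # decision 2
        cost += 7.5
    return cost                              # two decisions -> complexity 3

def shipping_cost(weight, express, international):
    if weight <= 0:                          # one decision -> complexity 2
        raise ValueError("weight must be positive")
    return surcharges(base_rate(international), express, weight)
```

The total number of decisions doesn’t change – the problem still requires them – but no single function sits above the threshold, and each piece can be read and tested on its own.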
AQtime Pro by SmartBear, a Windows code-profiling tool, can analyze your binary code and report its cyclomatic complexity. So, once you’ve built your code (and you do build your code prior to check-in, right?), you can run the complexity profiler and measure how successful you’ve been at breaking complexity down before it takes down your program – and you along with it.
Fortunately, using AQtime Pro is not complex at all. And that will help you go on vacation when you planned to, leaving all that (hopefully less) complex code behind.