On Code Complexity


We humans think a lot about how we’re different from other species on this planet. Defining these differences strokes our ego because we’re brought up to bask in the glow of our “superiority” to everything else on Earth. One of these differences – we believe – involves complexity. The life of a modern human being certainly seems to be quite complex. And, feeling superior, we humans pride ourselves on being able to effectively deal with all that complexity.

Where does complexity come from?

Complexity, at its core, is about decisions – choices that we face and decisions that we need to make. We make thousands of decisions every day, frequently without even realizing it. Sometimes we get excited about all that complexity.

“You can’t believe how many balls I juggle during the day!” we proudly proclaim. Somehow all that juggling makes us – well, men anyway – feel like tough guys. All that activity keeps us busy and intellectually entertained. So much so that when we finally wind down – like on vacation in a quiet setting – we somehow feel like less of a person. There’s so much less to decide (assuming you’ve planned the vacation well). It’s as if someone popped our overinflated balloon of life.

Yet, frequently, that down time provides us with some of our best moments. The pervasive noise of life subsides and we discover unexpected insight into things we hadn’t had time to reflect on until then. We were too busy making all those decisions.

So what does that have to do with code?

Code is the prose of software engineers. Engineers deal with complex systems on a daily basis – designing them, making them, repairing them, or explaining them. And all those systems are built with code.

Since the processes and algorithms that these software systems model are pretty complex, the underlying code tends to be complex as well. In general, this is not a good thing: complex code is harder to understand and thus more prone to bugs – particularly when someone other than the original author makes changes, which is hardly uncommon in the software world. While the likelihood of bugs doesn’t grow strictly linearly with code complexity, it is generally accepted that functions that are too long and/or too complex are red flags.

Usually, software engineers with any experience can look at code and intuitively gauge its level of complexity. Whether they can effectively deal with that complexity depends on their skill. As you can imagine, that’s not good enough for managers – they need measurement. Fortunately, there is a precise way to measure code complexity, known as cyclomatic complexity, or McCabe complexity after the researcher who defined its calculation. As you might expect, the 1976 paper that introduced it was a little – shall we say – complex, particularly in the way it outlined the calculation.

As the concept grew in popularity, a simpler calculation method was developed. All you need to know is the number of decisions built into your code: the complexity is that count plus one. Sound familiar? Just as in our daily lives, the more decisions in a piece of code, the more complex it is. Decisions in the code appear as IF conditionals, loops of all sorts, and exception blocks.

It’s not surprising that “straight line” code (code in which instructions are executed sequentially, without branching, looping or testing) has a complexity of “1.” That’s as low as it gets. Minimum decisions, minimum complexity. Logically then, as the number of decisions in a function grows, the complexity grows as well.
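The simplified decisions-plus-one rule is easy to sketch. Here is a minimal Python approximation (my own illustration, not a production metric tool) that counts the decision points the article names – conditionals, loops, and exception blocks – and adds one:

```python
import ast

# Decision points named above: IF conditionals, loops, exception blocks.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return 1 + decisions

# "Straight line" code: no branches, no loops, no exception handling.
straight_line = """
a = 1
b = a + 2
print(b)
"""

branchy = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "probably prime"
"""

print(cyclomatic_complexity(straight_line))  # 1: no decisions at all
print(cyclomatic_complexity(branchy))        # 4: one if + one for + one nested if
```

A full McCabe implementation would also count boolean operators, ternaries, comprehension conditions, and so on; this sketch sticks to the decision kinds listed in the paragraph above.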

Using code complexity metrics to improve code quality

Complexity can be exciting, but life is generally better when it’s less complex. The same can be said for code. The best code is not the most complex (even though that may stroke the engineer’s ego). The best code has low complexity – and keeping it low is much of the art of writing good code. Less complex code is also easier to understand and far more likely to be of high quality.

Unlike life, where we can get away to escape complexity, code never takes a vacation. The only way to avoid complexity there is to write the code properly from the start. Refactoring and new development both benefit from code complexity statistics: a general downtrend in complexity indicates improving code quality. That’s where tools come in. They help us evaluate complexity and reduce it wherever possible.

When you measure an existing code base for complexity, some high-complexity areas will immediately stand out. These are the potentially problematic areas of your code – the places to apply more intense code review, focused testing, and/or refactoring. Some of these areas may have been collecting dust for years, just waiting for the right time to blow up.

As a matter of course, a good practice is to establish an acceptable code complexity threshold. Functions beyond that level need to be broken down before they are committed to the team repository. The threshold typically used in industry is 20–25, though 10 is a better limit for life-critical systems.
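A threshold check like this is easy to automate. The following self-contained Python sketch (names and the sample file are my own, and the decision counting is the same simplified approximation as above) flags every function whose complexity exceeds the limit:

```python
import ast

# 10 is the stricter limit suggested for life-critical systems;
# raise it toward 20-25 for the typical industry threshold.
THRESHOLD = 10

def complexity_of(func: ast.FunctionDef) -> int:
    """1 + decision points (ifs, loops, exception handlers) in one function."""
    decisions = sum(isinstance(n, (ast.If, ast.For, ast.While, ast.ExceptHandler))
                    for n in ast.walk(func))
    return 1 + decisions

def functions_over_threshold(source: str, threshold: int = THRESHOLD):
    """Names and complexities of every function exceeding the threshold."""
    tree = ast.parse(source)
    return [(f.name, complexity_of(f))
            for f in ast.walk(tree)
            if isinstance(f, ast.FunctionDef) and complexity_of(f) > threshold]

# A sample "file": one trivial function, one with 12 branches (complexity 13).
body = "\n".join(f"    if x > {i}: x -= 1" for i in range(12))
sample = f"def simple():\n    return 1\n\ndef gnarly(x):\n{body}\n    return x\n"

print(functions_over_threshold(sample))  # [('gnarly', 13)]
```

Hooked into a pre-commit check or CI step, a report like this is what lets a team enforce the “break it down before you commit it” rule rather than relying on intuition.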

AQtime Pro by SmartBear, a Windows tool for profiling code, can analyze your binary code and report its cyclomatic complexity. So, once you’ve built your code (and you do build your code prior to check in, right?), you can run the complexity profiler tool on the code and get a measurement of how successful you’ve been at breaking the complexity down before it takes down your program and you along with it.

Fortunately, using AQtime Pro is not complex at all. And that will help you go on vacation when you planned to, leaving all that (hopefully less) complex code behind.

Comments:

  1. AlanPage says:

    It’s worth pointing out that McCabe’s cyclomatic complexity is ONE measure of code complexity – not THE measure. Halstead metrics, CK metrics (including fan-in, fan-out, etc.) – and many others provide the same sort of “smoke alarm” (may not be a fire, but you should look) code metrics.

  2. “The most effective way to reduce cyclomatic complexity is to pull out portions of code and place them into new methods. This pushes the complexity into smaller, more manageable (and therefore more testable) methods. Of course, you should then test those smaller methods.” – a quote from the IBM paper.

    This is often not a solution! You simply end up introducing fragmentation: before, I had to look at one complex function; now I have to look at N functions, even though none of them is complex on its own. See how you shapeshift one type of complexity into another! I’m sick of playing find-my-hidden-code 🙁 Throw in a splendid feature like #regions in C# and it takes a century just to find each member of the dance group, only to realize it is indeed a complex dance in the end :))))))

    Also: evaluate the context first instead of blindly following rules and fixed thresholds, e.g. “max 25 CC allowed.” So what if I have a function with a switch statement of 24 cases (or an if-else chain of 23 ifs and 1 else), all of them one-liners? Although the CC is high, you must all agree that such methods are among the easiest to maintain.

    P.S. Context-first development 😛 – and you’re better off rethinking the design than doing a cosmetic code shuffle.

  3. I agree with the IBM approach. Many of today’s editors have click-through navigation; with that in mind, it is much better to refactor as described – separating code into smaller methods is far more comprehensible and maintainable. I don’t want to discuss a switch with 24 cases or an if-else chain like that – if you have one, something is wrong with your design.
