The current focus on building everything as a reusable component is a smokescreen that results in software that is larger, slower, and more bug-ridden than software built with older methods, argues Jim Mischel. He attacks the ridiculousness of the “Single Responsibility Principle,” the often silly reliance on superficial unit tests, and many other sacred cows that the current crop of snake oil salesmen are peddling.
I’m in the business of building software that people use. Sometimes I’m my only client, and other times my work is seen by thousands or millions of people. More importantly, I get paid for producing working programs. My clients don’t particularly care what indentation style I use, how I model internal data structures, or if I adhered strictly to the SOLID concept. Most often, my clients want a minimum viable product that we can then subject to successive refinement until we reach some definition of “done.”
It’s a balancing act, trying to write good code that works today and can be changed tomorrow with minimal effort. Over the years, I have adopted a pragmatic, rather than dogmatic, approach to software construction, which often leads to spirited discussions with team leaders who have wholeheartedly embraced The Next New Thing.
A little Learning is a dang’rous Thing;
Drink deep, or taste not the Pierian Spring:
There shallow Draughts intoxicate the Brain,
And drinking largely sobers us again.
—Alexander Pope, An Essay on Criticism
Some years ago, I was hired to help build the new version of a highly successful computer game. The company had spent more than a million dollars on a sequel, and then scrapped it. Our team was to start over completely, not using any of the game design or program code from the failed attempt. Nobody who was on the previous team would shed light on why they failed, and management actively discouraged discussion of the matter. But I was curious.
It took me several days to track down the old source code and technical design documents. It took me about five minutes to understand why the project failed. It’s been 15 years, and I still don’t understand how anybody thought that the design would work. That the company management let it go on long enough to waste a million dollars indicates that the team leader was really good at pulling the wool over management’s eyes, or management simply wasn’t paying attention. I never met the former team leader, but my experience with the project manager indicates that both of those scenarios are possible.
The code was written in C++, which was common enough in the games industry back then. Unfortunately, the lead programmer was enamored with the language and, from the looks of the code, he thought it was his duty to use every feature of the language at least once, regardless of whether he fully understood the ramifications of doing so or whether the feature was appropriate for the job at hand. In particular, the primary data structure was relational rather than hierarchical, and relations were expressed using multiple inheritance. When I saw one class inheriting from 19 different base classes, I thought I had read the code wrong. Whoever designed this code apparently hadn’t heard of the diamond problem, or thought that making it worse was the solution.
The source code for the game was so riddled with inappropriate applications of C++ language features that it led me to write a rant I titled Syntaxophilia. Since that time, I’ve encountered many other projects that have struggled or failed in large part due to the inappropriate use of language features, design patterns, and other technologies.
What alarms me is that it appears to be getting worse. As advances in software design become better known, more programmers are trying to use those advanced methods—without fully understanding what they’re getting themselves and their teams into.
In all too many instances, the people involved in the products’ technical design have lost sight of the real objective: creating working software. They focus, instead, on the form rather than the substance.
I see this happening with increasing frequency. More distressing is that when confronted with their design shortcomings, those same designers ignore the most important issue—that their programs don’t work—while defending the use of whatever design patterns or development methodologies they employed.
The book Design Patterns was an important contribution to the craft of building software. It cataloged 23 classic software design patterns, explaining how to recognize a common model and how to apply the pattern appropriate to it. Every software developer should have a copy of that book on his shelf and should be familiar with the patterns described therein.
But those developers also should understand that the idea is to identify a common model and apply the appropriate pattern. Too many developers become enthralled with a particular pattern and try to model all problems to fit that pattern. The result is as effective as driving a nail with a torque wrench. The most important lesson to be learned from Design Patterns is not how to implement a particular pattern—I can teach that to any marginally bright programmer—but rather when to do so.
I began programming shortly before the Structured Programming purists were out in full force. I found that they had some good ideas, but their dogma was ridiculous. For example, their admonition that every block of code should have exactly one entry point and one exit point was completely idiotic, and their refusal to admit the usefulness of the goto construct in any case bordered on the moronic. Any programmer writing working real-world code has had to violate both of those principles.
The move from Structured Programming to Object-Oriented Programming brought similar hype and ridiculousness, only more of it. In particular, the Single Responsibility Principle (SRP) and the Dependency Inversion Principle (DIP), if applied strictly, make software harder to build and more difficult to understand. Adhering to those two principles does make it easier and less risky to modify code, provided that you can wade through the extra layers and make the modifications in the proper places.
My primary problem with the SRP and the DIP is that very often one doesn’t have a clear idea of what a class will do before it’s implemented, and the “one thing” that the class was supposed to do is actually three or four different things. Adhering to these two principles then becomes like the old computer science class exercise in which one is supposed to write a flowchart and then build a program to implement it. Almost invariably, students (including me) would write the program and then build the flowchart. When working with SRP and DIP, we tend to build something that works and then refactor so that it adheres to the SOLID principles. I distrust any development principle that requires me to “go back and fix it.”
Another problem with DIP is that for most classes it demands extra work. Systems that strictly adhere to the DIP end up building abstract interfaces for everything—even those classes that most likely will not have more than one implementation. Any time the implementation changes, one must also change the abstract interface. This incurs unnecessary costs during implementation, during modification, and also when debugging. I grant that I incur some small amount of technical debt when I create a concrete class without a corresponding abstract interface. If I ever find that I need that abstraction, I have to go back and add it. But most often I don’t need the abstraction.
There are those who say that by failing to build interfaces, I’m taking on technical debt. They’re right. But technical debt isn’t necessarily bad. In this case, I’ve “borrowed” the time and effort it would take to create those interfaces, knowing full well that I might have to repay it at some point. But in most cases I never have to repay the loan! Better still, there’s rarely any cost associated with failing to make everything an interface. I have, in essence, taken an interest-free “loan” that I never have to repay.
Even with advanced debugging tools, it’s often difficult to identify a concrete implementation when looking at an interface. During code review, when I’m just looking at code without the benefit of advanced tools, it’s nearly impossible to keep track of which concrete implementation is being referenced.
Unit testing is one of the few modern practices that comes close to living up to its hype. Unit testing can make your code more reliable, and speed the development of your project. That seems counter-intuitive, because writing and maintaining good unit tests is hard. It takes precious time and effort—time and effort that would seemingly be better used on other things. But good unit tests really do ensure better components, meaning that integration is less of a challenge because the individual units are known good.
However, you can’t make a half-hearted attempt at unit testing. Poor unit tests are worse than no unit tests at all. Programmers enjoy writing unit tests almost as much as they enjoy documenting their own code, and they’re almost as good at it. This is changing, but slowly. In addition, programmers are notoriously bad at testing their own code. Good unit testing will improve your code, but instituting good unit testing requires a strong hand to start, close supervision, much education to show programmers how to write good unit tests, and some serious attitude adjustment for many programmers.
I’ve seen projects fail for any number of reasons, but never because the programmers made something work. I’ve seen—heck, I’ve written—working code that would fail every modern metric of “good code.” And yet, it works, the product is successful, and the development team is busy working away on version 2.
On the other hand, I’ve seen many projects fail because the development team was more concerned with “doing things right” than with making things work. One memorable project had the most beautiful code one could imagine. By modern measures, it was perfect. It didn’t do anything useful, but it was a work of art. The last I heard, that team had been disbanded and all the developers were looking for new jobs.
For fools rush in where angels fear to tread.
—Alexander Pope, An Essay on Criticism
Earlier this year I started working on a social commerce project for a major computer manufacturer. Although the application’s premise was unique, the implementation was, or should have been, pretty typical: a Web front end and a database back end with minimal “glue” in the middle.
But whoever designed the application had apparently spent more time reading about design patterns than he did actually writing working computer programs. The result was an over-designed and incomprehensible mess that paid lip service to every conceivable design pattern and coding technique without regard to their applicability or proper use. My recommendation after seeing this abomination was to scrap the project—which they did, about a month after they terminated my contract.
A primary consideration behind the development of this project was scalability. The Powers That Be had decreed that the system be scalable from the start. They were operating under the misguided belief that the company’s reputation would ensure that the project would be an instant hit with millions of users flocking to the site immediately after it became public. Somehow, management convinced the development team of this, as well (or the team lacked the political standing to argue: who can tell the bosses that their confidence is misplaced?).
That focus on scalability, in my opinion, doomed the project from the start. The project was doing something that nobody had ever tried before. Nobody knew exactly what they were trying to build or had anything but a vague idea of how it would work. In that environment, the ability to move quickly is paramount. And yet the scalability requirement necessitated that a huge amount of infrastructure be created before anything else could be done. Once built, that infrastructure is difficult to change. Unless the infrastructure is rock-solid reliable and nearly invisible to the software that depends on it, its mere existence makes quickly changing directions impossible.
It's possible to build such infrastructure, but doing so requires careful planning and an in-depth understanding of the technologies, principles, and methodologies to be used. One doesn't simply slap together a Service Oriented Architecture (SOA), cobble together a CQRS model, and implement eventual consistency haphazardly. Including any of these technologies in your project, when appropriate, can greatly increase your chances of success. But any one of these techniques applied incorrectly or when it's not required will almost guarantee that your project fails.
Not only does this infrastructure take a lot of time to implement, it makes higher demands on the code that uses it. Programmers writing code that sits on top of such infrastructure have to understand more about how the underlying systems work, and have to write more code to interface with it than with more traditional infrastructure. Everything is harder, takes longer, and requires more care, more monitoring, and more formal process to make sure that everything is up and working.
The team had paid lip service to the formal process and monitoring, although the procedures put in place weren't effective. In addition, the developers installed a lightweight enterprise service bus in lieu of a full-blown SOA, implemented a fundamentally broken CQRS data model, and built an eventual consistency model that was perpetually inconsistent and often just plain wrong.
All of this, of course, was wrapped up in every software development buzzword imaginable, and defended with almost religious fervor whenever somebody mentioned that it just didn't work. When I asked about this behavior, a co-worker said, “Telling the design team that the design is broken and can't possibly work is like calling their baby ugly. And no parent likes to hear that.”
A complex system that works is invariably found to have evolved from a simple system that works.
A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system.
—John Gall, Systemantics
John Gall's book is hardly a scholarly treatise, but it contains a whole lot more truth than many of the software development methodologies I've seen. Complex systems are incredibly difficult to build. Just making something work is hard enough. Anything that adds complexity without providing a compelling positive benefit should be eliminated. That includes things that purists hold sacred.
A sign over my white board used to read, “If you don't have the time to do it right, where will you find the time to do it over?” That sentiment only makes sense if you choose the correct definition of “right.” “Perfect,” I've found, is the enemy of “good enough,” and most often good enough is … good enough. If the project works and is completed on time and on budget, then it's right, regardless of whether I pay homage to the buzzword du jour.
Design patterns, development methodologies, programming languages, advanced technologies, and off-the-shelf software packages are tools. As programmers and designers, we must have the tools and understand how to use them, what they cost, what the alternatives are, and, most importantly, when not to use a particular tool.
Applied correctly, the proper tool can make a difficult job much easier. Applying any tool incorrectly or, worse, cobbling together a substandard replacement tool will destroy a project as surely as driving a screw into a board with a fist-sized rock will destroy a solid cherry bookcase.
Jim Mischel started his career as a computer hobbyist in the late 1970s. Since then he has written COBOL banking applications, low-level microcontroller code, games, compilers, Web applications, and a distributed high-performance Web crawler. The former host of the .NET Reference Guide, Jim now focuses on custom design and development. When not bashing the keyboard, you'll find him putting in the miles on his bicycle or indulging his passion for wood carving.