Exploring Programming Languages’ Science and Folklore

  December 12, 2013

Functional programming! Declarative! Object-oriented! Strong typing, static, dynamic! The programming community certainly has put a lot of energy into deciding techniques and methodologies by argument. But is there persuasive experimental evidence about what truly helps and what hurts? Ah, that’s a good question.

"Does anybody really know what time it is?" the band Chicago asked in its music-chart-leading way back in 1969. When it comes to rigorous knowledge about programming language productivity, the answer might well be, "No."

On one hand, developers have a rich lode of folk belief about programming practice: gotos make coders infertile, or at least unpopular; serious manly men write in C++ or Java and leave Basic and Ruby to women and children; C is obviously at least 10 times faster than Python; the more typing, comments, and modularization, the better; and so on. While these might sound like caricatures from polarized arguments, thousands of online forums and abundant personal experience within programming shops provide evidence that developers really do talk and believe this way. At least some of them, at least some of the time.
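None of these beliefs arrives with a citation attached. The speed claim, at least, is cheap to test rather than argue about. Here is a minimal sketch, standard-library Python only, with a made-up workload standing in for your real code, of the measuring-instead-of-asserting habit this article pleads for:

```python
# A made-up workload standing in for your real code: the same arithmetic
# done in a pure-Python loop and via the built-in sum, which runs in C
# inside CPython. Measuring beats repeating folklore about speed ratios.
import timeit

def python_loop(n=100_000):
    total = 0
    for i in range(n):
        total += i
    return total

def c_builtin(n=100_000):
    return sum(range(n))  # the loop happens in CPython's C internals

for fn in (python_loop, c_builtin):
    seconds = timeit.timeit(fn, number=100)
    print(f"{fn.__name__}: {seconds:.3f}s for 100 runs")
```

Whatever ratio prints on your machine, it is a measurement of one workload, not a law about two languages.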

An associated belief is that these things matter. You’ve seen the undercurrents yourself: a mistaken choice between dynamic and static typing can doom a start-up otherwise destined for billion-dollar status, while a sufficiently innovative use of GitHub’s branching capabilities will propel a vendor all the way to profitability.

But… are such beliefs based in fact? With so many clear choices leading to such dramatic consequences, programming science surely has yielded important definite conclusions by now about how to code, right? Well, no.

Scientific conclusions… for low values of “scientific”

Perhaps it astonishes you as much as it does me: we don't know, in any scientific way, which languages are best. We aren't entirely sure which coding expressions or styles should be avoided. And as an industry, we're still far from certain that computer scientists are even addressing the questions that matter in commercial practice.

It's no particular disgrace that the juvenile field of programming (it is barely 200 years since Jacquard began to automate his textile looms) still has a lot to learn. Yet we behave as though the industry has everything figured out.

This level of ignorance becomes less shocking when compared to, say, medicine or business. Those domains certainly are important, even gravely so; yet the science behind much of their current practice, too, is either missing or profoundly flawed.

What do we know, then? Greg Wilson has a superbly accurate handle on the question, which he discusses in a video, What We Actually Know About Software Development, and Why We Believe It's True. His first definite result: Essentially all work on estimation of software projects is useless. He concluded that estimation returns to supervisors what engineers believe supervisors expect, perhaps with a bit of noise added to the signal.

Wilson's second definite conclusion is that high-level languages are high-level: expressive languages produce measurably more reliable and measurably slower-performing programs. (There's much else in Wilson's hour-long video that makes it worth watching: the non-linearity of requirement bundles, the importance of analysis, the payoff of finding bugs early, economic management of inspections, the necessity of small patches, the dominance of organizational over geographic geometry, and so on.)
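To make the reliability half of that second conclusion concrete, here is a small illustration of my own, not taken from Wilson's talk: the same task written at two levels of abstraction. (The speed half of the claim concerns whole languages, C versus Python say, rather than styles within one language.)

```python
# The same task at two levels of abstraction. The low-level version has
# more moving parts to get wrong (index bookkeeping, off-by-one chances);
# the high-level version states the intent in a single expression.

def evens_low_level(numbers):
    result = []
    i = 0
    while i < len(numbers):
        if numbers[i] % 2 == 0:
            result.append(numbers[i])
        i += 1
    return result

def evens_high_level(numbers):
    return [n for n in numbers if n % 2 == 0]

assert evens_low_level(range(10)) == evens_high_level(range(10))
```

Fewer opportunities for error is exactly the kind of mechanism that could explain Wilson's measured reliability gap, though the explanation itself is a hypothesis, not a result.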

Glenn Vanderburg has a different slant on the related topic of Craft, Engineering, and the Essence of Programming, also worth the 35 minutes it takes to view.

Wilson says scientific standards in software have improved in recent years. That's the one reservation I have about his presentation: I still see unsupported claims widely accepted. An example of what I mean is the widely cited "MapReduce: simplified data processing on large clusters," published in 2008 in Communications of the ACM, certainly a prestigious outlet. The last sentence of the three-sentence abstract is, "Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day."

Consider from your own experience how weakly "easy to use" correlates with, let alone follows from, intensity of use on serious hardware. This article is generally treated as a founding academic document for MapReduce and related subjects. It deserves that distinction, for it effectively communicates the MapReduce idea to scholarly audiences. The article does not, however, provide a strong scientific basis for the proposition that "Programmers find the system easy to use."
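For anyone who hasn't read the paper, the idea it communicates so well fits in a few lines. What follows is a toy, single-process rendition of the canonical word-count example, showing only the shape of map, shuffle, and reduce; Google's contribution was distributing these same steps across thousands of machines.

```python
# Toy, single-process rendition of MapReduce's word-count example:
# map emits (key, value) pairs, the framework groups values by key,
# and reduce folds each group. Only the shape is shown here.
from collections import defaultdict

def map_phase(document):
    for word in document.split():
        yield word, 1                # emit (word, 1) for every occurrence

def reduce_phase(word, counts):
    return word, sum(counts)         # fold all counts for one key

def word_count(documents):
    groups = defaultdict(list)
    for doc in documents:
        for key, value in map_phase(doc):   # map
            groups[key].append(value)       # shuffle: group by key
    return dict(reduce_phase(k, v) for k, v in groups.items())  # reduce

print(word_count(["to be or not to be"]))
# {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```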

One interesting conclusion from a survey of the field of scientific research into programming is how little is known about such traditional beliefs as the efficacy of functional languages for the construction of parallelized programs, or the benefits of object orientation. To a large extent, the experiments required to judge such questions simply haven't been done. For the most part, only narrow, local results (and a lot of still-open questions) have been reached.
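The parallelism question shows how far the beliefs outrun the evidence. The folk argument is easy to state, as in the sketch below, which uses Python's standard multiprocessing pool with a trivially pure function standing in for real work: side-effect-free functions can be farmed out to workers in any order, with no locks. Whether that advantage survives contact with production workloads is precisely what the experiments haven't settled.

```python
# The folk argument for functional style in parallel programming: a
# side-effect-free function can be applied to its inputs in any order,
# on any worker, with no locks or shared state. A trivially pure
# function stands in here for real work.
from multiprocessing import Pool

def square(n):
    return n * n  # output depends only on input: safe to farm out

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        print(pool.map(square, range(10)))
    # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```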

One typical outcome from research into programming languages is Linda McIver's 2000 write-up of The Effect of Programming Language on Error Rates of Novice Programmers. She found that beginners working in GRAIL made significantly fewer errors than those working with LOGO. To my knowledge, there haven't been enough detailed follow-ups to yield strong support for the importance of error diagnosis versus the presumed ease-of-learning of imperative languages, or any of the other dimensions McIver cites as possible sources of the difference. Although everyone who reads this paper can pick out a favorite explanation for why GRAIL trumps LOGO, we have only exceedingly weak scientific evidence for or against any of the reasons.

Without silver bullets, we're back to "execution"

What does all this mean to you in your day-to-day life as a working programmer, tester, or DevOps practitioner? First, it's an opportunity to simplify your life: there isn't a large ROI in agonizing over whether to use Haskell or PowerShell, or in elaborating project-estimation methodologies. You can minimize the energy you invest in such glorified territorial disputes. Feel free to make such decisions relatively quickly, perhaps based on compatibility with colleagues or organizational constraints, and focus more attention on areas where your engineering judgment and effort are likely to make an impact.

One clue to success in real-world practice seems to be the coordination of multiple factors; Sarah Mei argues for object orientation combined with development methodology and, maybe most important, "good team communication." Enjoy your proficiency in the languages you know well, be clear about why you want to learn or switch to new ones, and make the most of the practices you know work well in your own situation.
