The Hubris Of Future Proofing

All software development carries within it a tension – we do not have unlimited resources to spend on a project, and so we must choose our battles carefully and invest our time where it is most valuable. There is always, always, *always* more work than time.

Donald Knuth, one of the pioneers of software engineering as a discipline, once said ‘We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil’. Contained within that small sentence is a world of wisdom that is often ignored, and its lesson can be abstracted even further into the realms of ‘future proofing’.

First of all, let’s talk about the statement itself – why is it a bad idea to optimize *before* there’s a problem? Surely that makes sense – after all, don’t software engineers bang on about how important it is to catch problems early, before too much has been built around the code?

The problem with premature optimisation is twofold – the first is that optimisation usually (in real-world cases) doesn’t involve simply swapping slow code for fast code. Mostly it involves refactoring code so that it stresses efficiency, and that has consequences for the readability and maintainability of the code. Optimised code can be tremendously obscure once you’re trying to bleed every last cycle of performance out of an algorithm, because the fastest way to do things is hardly ever the most readable way to do things. The most memory-efficient representation is almost never the most maintainable representation.

To give an example of this, let’s imagine a program where there is a need for eight boolean values to be stored – perhaps they represent the state of physical switches, or software options, or whatever – it doesn’t matter. Now, we can store that as an array of eight booleans, which is readable… or we can use the individual bits inside an 8-bit integer, which is not. That probably sounds like a ridiculous example, but I’ve seen it done.
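
To make the contrast concrete, here’s a minimal C sketch of the two representations (the switch names and helper functions are made up purely for illustration):

    #include <stdbool.h>
    #include <stdint.h>

    /* Readable: one boolean per switch. */
    enum { SWITCH_COUNT = 8 };
    bool switches[SWITCH_COUNT];

    void set_switch(int which, bool on) { switches[which] = on; }
    bool get_switch(int which)          { return switches[which]; }

    /* 'Optimised': the same eight flags packed into the bits of one byte. */
    uint8_t switch_bits;

    void set_switch_packed(int which, bool on)
    {
        if (on) switch_bits |=  (uint8_t)(1u << which);
        else    switch_bits &= (uint8_t)~(1u << which);
    }
    bool get_switch_packed(int which) { return (switch_bits >> which) & 1u; }

Both do the same job; the packed version saves seven bytes and costs you a mask-and-shift everywhere the values are touched.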

Readability and maintainability are the first casualties in the quest for optimisation[1], and they are the most important elements of software quality when it comes to writing a system. Software systems change *a lot* during development as the problem space is explored and the external context shifts. Your code has to be flexible enough to roll with the punches, because the punches are coming. Suddenly, your design decision to handle those eight switches in an 8-bit integer becomes a Big Deal when the requirement changes to ‘Oh, we’re moving to a 1-100 scale for these values, rather than a simple yes/no’. Optimised solutions, which are most often unique to a particular circumstance, are too inflexible to change easily.
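
Continuing the hypothetical sketch above, the readable representation absorbs that requirement change with a one-line edit, while the packed byte simply has nowhere to put eight 1-100 values:

    #include <stdint.h>

    enum { SWITCH_COUNT = 8 };

    /* Readable version: change the element type and carry on. */
    uint8_t levels[SWITCH_COUNT];            /* each entry now holds 1-100 */

    void set_level(int which, uint8_t level) { levels[which] = level; }

    /* Packed version: one bit per switch cannot hold a 1-100 value, so the
       packed byte, and every caller that shifted and masked it, has to be
       redesigned from scratch. */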

Secondly, it’s usually wasted effort.

I don’t recall whether I have written about the Pareto Principle before, but I am sure I have – it’s basically the ‘name’ for the 80/20 rule. An informal heuristic of software development is that ‘80% of the processing time is spent in 20% of the code’. Often, while developing, you won’t even know which 20% of the code that will be – an obscure feature you added as an afterthought may prove hugely valuable to users. It may turn out that the real-world scaling of a function isn’t good enough to deal with actual volume. It may be that a circumstance you thought would be unusual turns out to be the norm. You might be able to make some educated guesses as to which functions are likely to be costly, but it’s only once you throw your system out into the field to be battle-tested that you’ll know for sure.

The larger a system is, the harder it is to guess what the 20% of the code is going to be. Effort spent optimising the 80% is all but wasted.

Let’s look at an example… say it takes 1,000,000 CPU cycles to accomplish an action, and that action is occurring a thousand times per second. You look at those figures and go ‘eesh, we need to cut that down’. Following the 80/20 rule, the hot 20% of the code accounts for 800,000 of those cycles, and the remaining 80% of the code accounts for 200,000. If you optimise the hot 20% by 50%, you turn its 800,000 cycles into 400,000. If you optimise the other 80% by 50%, you turn its 200,000 cycles into 100,000. In the former case, you bring the total down to 600,000 cycles. In the latter, 900,000 cycles.
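
As a quick sanity check on that arithmetic, here is a trivial sketch using the numbers from the example above:

    #include <stdio.h>

    int main(void)
    {
        long total = 1000000;            /* cycles per action                */
        long hot   = total * 80 / 100;   /* spent in the hot 20% of the code */
        long cold  = total - hot;        /* spent in the other 80%           */

        /* Halve one portion or the other and compare the totals. */
        printf("optimise the hot 20%%:  %ld cycles\n", hot / 2 + cold);  /* 600000 */
        printf("optimise the cold 80%%: %ld cycles\n", hot + cold / 2);  /* 900000 */
        return 0;
    }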

Ironically, in the worse-performing case you had to optimise four times as much code to get a 10% gain. ‘We should forget about small efficiencies’ indeed – optimisation should be focused at the point where it’s needed, and where it’s most valuable.

Believe it or not (I will assume, not) this post isn’t actually about optimisation. That’s a lengthy preamble that leads to my main point – it’s a natural tendency to design systems for the ideal case, but it’s bad practice. You don’t have unlimited time as a software developer, and your effort must be invested where it will provide the greatest returns.

‘Future Proofing’ is a phrase that is heard quite often in software development, and it means ‘writing code that can cater for any future requirements’. There is a lovely post[2] on how crazy that can get at http://chaosinmotion.com/blog/?p=622 – and again, it may seem like a strawman constructed to prove the point, but it really isn’t. Little coding stories like this play out every day across the world, and nowhere is that more obvious than in MUD development[4].

MUDs are addictive engines of fantasy for developers, because they give you free rein to do *really cool stuff* without needing a business case. Along the way, while developing a particular system, you will be hit with the demon realisation: ‘hey, if I incorporated [this feature], then I could do [that cool thing] in the future if I wanted’.

You need to fight that impulse.

Premature future proofing is just as bad as premature optimisation – it complicates the code you are writing *now* for a future that may or may not benefit from the proofing. Moreover, the chances are that you’re never going to get to the point where you write that killer feature, because you’ll have been distracted along the way by one of the thousand other ‘wouldn’t it be neat if in the future I could do…’ thoughts. When you give in to the impulse to future proof, a large proportion of your effort (let’s say… 80 percent) is spent on code that is entirely conjectural. Only 20% of your effort is spent actually making your game; the rest is spent making a future game that you’ll never get around to[5]. All that future proofing doesn’t matter, because you never actually got to the point where you were ready to take advantage of the hard work. ‘It’ll make it easier for me in the future’ is not a convincing argument when it’s making it harder for you *now*, and in the future you may well have changed your mind anyway.

In the end, your future proofing will probably turn out to be wasted effort for another reason – your future requirements will also change. On Discworld, a significant alteration to the skill tree was put in place, and the intention was that the tree be ‘future proofed’. As such, it contained a number of speculative skills that weren’t used yet but would probably be used in the future. Alas, that future proofing was wasted effort, because the skills that were available didn’t have the necessary granularity or range to be useful to the people who wrote the systems that followed. Instead, systems had to be designed to fit an often arbitrary ‘future proofed’ skill tree simply because changing the tree is a big deal. There are five distinct skills for MAKING A POT, and only one for SAILING A SHIP – and the sailing system designed in the wake of this was far too complex for a single skill to handle.

Writing maintainable code is the best kind of future proofing – don’t assume that you’re going to need a particular feature; just write your code so that it can be altered with the minimum degree of consequence. Future proofing is a fool’s game, because you’re the fool who thinks you can proof against the future. Even the future version of *you* is going to look back on your ‘future proofing’ with contempt.

Drakkos.

[1] I am assuming in these cases that the code you are optimising isn’t simply badly written. Badly written code can often be optimised without sacrificing (or indeed, while simultaneously *improving*) readability.

[2] Ironically, the post does *so well* for so long highlighting the insanity of future proofing, and at the end fails the dismount[3].

[3] ‘The really smart Java developer figures out the domain of the problem set, knowing (for example) that factorial is actually a special subset of the Gamma function. Perhaps the right answer isn’t any of the code above; perhaps the right answer is using Gergo Nemes’s approximation to Stirling’s approximation to the Gamma Function’. That’s *really* not what the really smart software developer does. The really smart developer doesn’t write code that needs a degree in maths to understand, because the really smart developer has enough of a track record of achievement that they don’t need to pointlessly obfuscate code to show off how l33t they are. The really smart developer writes code that everyone can understand, so that they’re not the only ones who can maintain it.

[4] See, it is a post about MUD development! I bet you’re glad you hung on this far!

[5] Just because it’s all text it doesn’t mean a MUD is any easier to write than any other game. 20% of your free time is not enough to turn your concept into a reality. You need to invest 100% of the time you have available[6] for your development *in* the MUD, not in the future MUD.

[6] 100% of your time often isn’t enough either.
