Let’s define blurgs.
A ‘blurg’ is an event where a programmer’s understanding of "what my code does" is not, in fact, "what my code does".
Blurgs are an everyday part of programming, of course. We spend far more time in an ‘interim’ state than at a sign-off point. But for this conversation I mean to restrict the idea of a blurg to a programmer-program disconnect that occurs at a sign-off point. (A sign-off point is different for different teams. Most of the folks I work with regard a source code push into the repo as a sign-off. Others say the real sign-off points are only the days we go live. Some orgs use many layers of sign-off to make it hard to sign off. Whatever.) At its simplest, imagine me saying, "when it runs it prints 3," but when we run it it actually prints 4. That’s a pretty bog-standard blurg.
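If it helps to see that prints-3-versus-prints-4 disconnect in code, here’s a minimal invented sketch in C (not from any real codebase), where my account of the loop and the loop itself differ by exactly one:

```c
#include <stdio.h>

int main(void) {
    /* My answer at sign-off: "the loop runs three times, so it prints 3." */
    int count = 0;
    for (int i = 0; i <= 3; i++) {   /* the blurg lives here: <= where I believe < */
        count++;
    }
    printf("%d\n", count);           /* what it actually does: prints 4 */
    return 0;
}
```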
How important are sign-off point blurgs in the grand scheme of things?
Years ago, Stephen Jay Gould remarked that the "Age Of X" trope for periods of evolutionary history, like "Age Of Mammals" or "Age Of Dinosaurs", was basically just silly. It is now and always has been the Age Of Bacteria. There is no measure for which this is not true: more individuals, more species, more range, more survivability, more biomass; the list goes on and on.
It is my contention that we are now and always have been living in the Age Of Blurgs.
It’s not the age of procedures, or of objects, or of immutability, or of functional programming. It’s the age of blurgs.
There are lots of great fail stories in computer science. I have two myself that I find — at a distance of a few years — hilarious. Fail stories tell us about lots of brilliant ways we can lose in the software for money game.
You can fail by not talking to customers. You can fail by getting entangled with your org’s Software Prevention Team. You can fail by not considering that the internet is sometimes offline. You can fail by not considering transactions per second, or by never trying your facial recognition software on people who aren’t white. You can even fail intentionally — I have seen that.
But, in my view, the overwhelming majority of failures in software for money come from just this one thing: a sign-off point blurg.
And when I say overwhelming, I really mean overwhelming. Yes, integration problems happen, and we misunderstand the requirement, and "they" misunderstand the market. But those are the exceptions, not the rule.
Take the security-flaw headlines. Many recent exemplars involve extraordinarily complex analysis of things like pipelining or kernel mode. It’s rather brilliant, actually.
But the lowly buffer overrun (and there are no buffer overruns that aren’t blurgs) still exposes millions of devices every day to rootkits and worms.
Imagine you shipped a buffer overrun. I have; one of my two great failure stories is just that: a 1K buffer should have been 2K, and the app wiped the user’s drive every 1024 uses. For eight weeks. It was in the field for eight weeks. Thirty years later I still wince a little. Purest blurg. Had you asked me "what my code does", I can assure you that I would never in a million years have answered, "it wipes the hard drive every 1024 runs". Never.
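If you’ve never had the pleasure, here’s an invented C sketch of the general shape of that kind of blurg. It is not the original code, just the pattern: a buffer sized under one belief and written to under another.

```c
#include <stdio.h>
#include <string.h>

#define LOG_BUF_SIZE 1024   /* the belief at sign-off: "1K is plenty" */

/* Invented illustration, not the original app: append a record to an
 * in-memory log that gets flushed to disk later. */
void append_record(char *log_buf, size_t *used, const char *record) {
    size_t len = strlen(record);
    /* The blurg: no check that *used + len still fits in LOG_BUF_SIZE,
     * because "records are small and we flush often." One day they
     * aren't and we don't, and the copy runs past the end of the buffer
     * into whatever data happens to live next to it. */
    memcpy(log_buf + *used, record, len);
    *used += len;
}

int main(void) {
    char log_buf[LOG_BUF_SIZE];
    size_t used = 0;
    for (int i = 0; i < 2048; i++) {          /* ~2K of one-byte records */
        append_record(log_buf, &used, "x");   /* silently overruns after byte 1024 */
    }
    printf("wrote %zu bytes into a %d-byte buffer\n", used, LOG_BUF_SIZE);
    return 0;
}
```

Ask me at sign-off what that code does and I’d say "it logs"; I would not say "it scribbles over its neighbors once the log gets long enough."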
One description I’ve offered of this movement is that we’ve spent 20 years expanding our focus to include not just the made, but also the making, and that we are now expanding that again to also include the makers.
How, then, as we expand our vision to include the makers and the making, should knowing that we live in the Age Of Blurgs shape our behavior?
I strive to answer this question in a bunch of ways in a bunch of different places. Today, tho, I just want you to consider the question: what does living in the Age Of Blurgs mean to us, to how and what we teach and learn and actually do in the trade?