If we’re living now, and always have been, in the Age of Blurgs, then our strategy might change accordingly. Before I get to some ideas from the modern synthesis that are targeted at blurgs & blurging, tho, I want to point at one thing that won’t work, and why.
What won’t work is "try harder", and why it won’t work is "because humans".
What I mean when I say that just trying harder not to blurg is ineffective as a plan: blurging is not a failure of will or discipline. That is, blurgs don’t come from geeks goofing off or being lax or time-serving or any of those sorts of things. Instead of “try harder”, the point of this set of muses is “try different”. We’ll get to some different in coming days, but for now consider it foreshadowed, and let’s take a look at why “to blurg is human”.
There are three reasons we create blurgs, and they are not ever going away, because they are embedded in the human mind and in the human processes surrounding coding. They are “bandwidth”, “distraction”, and “semantic expectation”.
Bandwidth: we’ve been over and over this, but it’s worth a few sentences more for the noob. The bandwidth of the human mind, the number of independent concepts a person can hold active at one time, is extremely limited. We first met this concept in Miller’s famous paper about the magic number 7±2, back in the mid-’50s. Since then we’ve run thousands of experiments, and the consensus today is that a human’s mental bandwidth varies between 4 and 7 items.
Wait. What? Four and seven? That, my friends, is a brutal limit, and it appears to be a biological one.
Human minds do crazy mad tricks to work within these limits, notably naming & chunking on the one hand, and narrative on the other. (When we get to “try different”, we’ll be taking advantage of these.)
But the hard limit? 4-7. We routinely operate in conceptual systems with thousands of concepts, but moment to moment we are constantly using tricks to fit small subsystems into our mental scope.
I know this isn’t obvious. Not to me, not to most folks. But it is one of the best-documented results in all of cognitive science.
Distraction: this one has both an obvious and a subtle component. Distraction is really almost anything that has us looking away from a coding task.
The obvious component is simply the realities of human collaboration in any enterprise. Working days are chock-a-block with distractions. Everything from meetings to bathroom breaks. I’ve never met a geek who didn’t think they needed more time by themselves to rock code. But any working office is constantly breaking that time up. Sometimes the distraction is useful to the distractor, if not the distractee, and sadly, very often, it’s useful to neither. Staff meetings, anyone?
The more subtle distraction is just inside our little brains. The human mind is rarely still. (If it were, meditation would not be so valuable and difficult.) It, not to put too fine a point on it, never shuts up. It is constantly working to maintain your body, scratch itches, suppress or release emotion, predict the day ahead, analyze the day behind, and so on.
Because obvious distraction is so obvious, we tend to underplay self-distraction. But if you’ve ever had a quiet coding day where you couldn’t get anything done anyway, you’ll have felt the burn of subtle distraction.
Semantic expectation: this is basically seeing what you expect to see regardless of what is actually there. Like the others, this is a hard biological fact, not a failure of will.
One of the tricks humans use to parse systems that are larger than 4-7 ideas is narrative. We build a narrative of the recent past, and we use that narrative to parse the near-term future. It is a brilliant trick, and fundamental to human cognition.
Essentially, we use our narrative to create meaning, and we use that meaning to interpret what our senses tell us. That interpretation simultaneously enables us to think at all, and forces the sensory data to fit our thinking.
Semantic expectation is the act of letting the meaning-we’ve-made create expectations that subvert the meaning-that’s-there.
There are a million million jokes, riddles, and tricks that turn on semantic expectation.
But it’s not a joke; it’s real.
We have professional proofreaders, instead of proofreading our own work, because they aren’t subject to our semantic expectation: they see what’s actually on the page, where we see what we meant to put there.
Anyone working near our justice system knows how wildly untrustworthy eyewitnesses are: because they have semantic expectation, they see what they expect to see.
If you’ve ever found a huge flaw during a technical review, you will know what I’m talking about. How could I not have seen this stupid thing? Easy: semantic expectation kept me from seeing it. It’s an absolutely everyday experience for most geeks.
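To make it concrete, here’s a tiny sketch of the kind of blurg semantic expectation loves to hide. The rule and all the names here are invented purely for illustration; every language has its own version of this trick.

```python
# A made-up permissions rule, for illustration only: "admins and
# moderators can edit, but only when the resource is public."
def can_edit(is_admin: bool, is_moderator: bool, is_public: bool) -> bool:
    # The coder reads this as: (admin or moderator) and public.
    # The computer reads it as: admin or (moderator and public),
    # because `and` binds tighter than `or` in Python.
    return is_admin or is_moderator and is_public

# An admin editing a private resource: the coder expects False.
print(can_edit(is_admin=True, is_moderator=False, is_public=False))  # True
```

Once you know it’s there, the blurg is glaring. While you’re reading with the narrative “this checks role and visibility” in your head, it’s invisible.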
So where are we at?
We live in the age of blurgs, of simple disconnects between what the coder thinks the code says and what the computer thinks the code says.
We can say, “I will not blurg today”, but it’s to no avail. We will blurg today, and every day, for reasons having nothing to do with how hard we try not to: bandwidth, distraction, and semantic expectation, among others. We need strategies that are stronger than “try harder”.
And in the next little stretch, we’ll look at some "try different" options, both explicit and implicit in the modern synthesis.