Of all the bad advice the geek trades inherit from over-simplified manufacturing & engineering, perhaps the worst: "avoid changing code".
Take a minute and think of all the things the trade does to avoid changing code. Virtually every element of "big design up front" policy is based on this view. Long stretches for analysis, longer ones for design. Planning cycles that extend years into the future. Years. I am not exaggerating for effect. Years. Contract-like specifications, database designs that start with hundreds of tables. So-called security systems that make exposing an endpoint a six-month, four-committee process. All the assorted viewpoints we call endpointing: trying to create a complex system by drawing a straight line from here to there. Think of this huge investment. And it's all because we are trying to avoid changing code.
Why? What’s so bad about changing code? There’s a long list of potential answers to that question. But they seem to fit conveniently into two rationales.
- Rationale 1: Changing code is bad because it means doing the same work twice. (An inheritance from manufacturing faux-efficiency.)
- Rationale 2: Changing code is bad because it introduces potentially wild uncertainty. (An inheritance from engineering faux-science.)
The argument from efficiency is rooted firmly in construction. If you don't dig the basement now, it's far harder to dig it later. One hears that exact metaphor with some frequency. And it has some application to software, but not much. There are a small number of things it's best to implement first. These are the things we geeks think of as "aspects", in the aspect-oriented programming (AOP) sense: concerns that cut across the whole codebase.
Localization is a good example. The longer we put off localization, the harder it gets to do. But most of software isn't like digging a basement. Most of software isn't aspect-y like this, touching everything everywhere. Most of the things we worry about when we endpoint our systems are things the geeks could change any time we wanted, at very low cost. Now, let's be careful here. Not all geeks. 🙂 People trained heavily in change-avoidance need to be re-trained before they can do this. Code bases mired in change-avoidance need to be re-worked before this can be done to them. The point is, if we're not avoiding code change, code change from an efficiency point of view is basically net-neutral. (More later.)
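To make the "aspect" point concrete, here's a minimal, entirely hypothetical sketch of why localization resists retrofitting: every hard-coded string is another call site you have to hunt down later, which is exactly the everything-everywhere quality most code changes don't have.

```python
# Hypothetical sketch (names are mine): why localization is "aspect-y".

# Before: user-facing text is hard-coded at every call site.
def greet_v1(name: str) -> str:
    return f"Hello, {name}!"   # one of hundreds of scattered strings

# After: text goes through one lookup table, so adding a language later
# means editing data, not hunting down every function in the codebase.
MESSAGES = {
    "en": {"greeting": "Hello, {name}!"},
    "fr": {"greeting": "Bonjour, {name} !"},
}

def greet_v2(name: str, lang: str = "en") -> str:
    return MESSAGES[lang]["greeting"].format(name=name)

print(greet_v1("Ada"))        # Hello, Ada!
print(greet_v2("Ada", "fr"))  # Bonjour, Ada !
```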
What about the argument from uncertainty? The gist: "Don’t touch that, you don’t know what it’s connected to." Whenever I hear this, I think to myself, ummm, gee. Why don’t I know what it’s connected to? The idea’s that computer software is a dark mysterious place, full of indeterminacy and randomness. Changing code means radical uncertainty. This is, to put it as mildly as I can bear to, "not true". Computers work because they do the same thing in the same situation every single time. That’s the exact opposite of uncertainty.
To be sure, a computer doing the same thing in the same situation every single time can still confuse humans. This happens because we think the situation is the same but the computer does not. Computers are very picky; people are loosey-goosey. But changing code doesn't introduce uncertainty, it reveals and heightens uncertainty that was already present. Another way to say this: if you know exactly what the "before" does, you can easily guarantee that the "after" honors the parts you keep. Same provisos as before: geeks need to learn how to do this, and codebases need to be built/altered to support it.
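One concrete way to "know exactly what the before does" is to pin it down with characterization tests before touching it. A minimal sketch, with an invented function, assuming a pytest-style runner:

```python
# Hypothetical sketch: pin down the "before" so the "after" can be verified.

def format_price(cents: int) -> str:
    """Existing behavior we intend to preserve while changing the internals."""
    return f"${cents // 100}.{cents % 100:02d}"

# Characterization tests: record what the code does today, before we change it.
def test_whole_dollars():
    assert format_price(500) == "$5.00"

def test_cents_are_zero_padded():
    assert format_price(507) == "$5.07"

# Now rewrite format_price however you like; as long as these still pass,
# the parts we promised to keep are demonstrably kept. No radical uncertainty.
```

The tool doesn't matter; the discipline of recording the "before" is what turns a scary change into a checked one.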
My bottom line here: "Avoiding code change" is fabulously expensive and entirely unneeded. It promotes neither efficiency nor certainty. I mentioned before that the efficiency difference between changing and not-changing code is nil. If that were the whole story, why choose code change at all? Because code change enables continuous harvesting of value, and avoiding code change disables it. Efficiency and certainty are surely related to value harvesting; I wouldn't argue with that. But they are not the only factors. In my forthcoming series on optimizing for complexity, I offer a sub-problem: optimizing for sustenance.
The key idea here is that orgs can sustain themselves while they develop software by frequently harvesting every bit of value they create. This comes down to continuously revealing increasing capability to users and using their feedback interactively to drive and feed our efforts. Avoiding code change cannot do this. It suppresses exactly the attitudes, techniques, and experience we need to harvest value interactively.
Umpty-nine years ago, when all these old agile guys were young and less sullen, a guy named Beck wrote a book that was foundational. The name of the book is "Extreme Programming Explained", and it has a particularly revealing subtitle: "Embrace Change".
Embrace change. In code, in people, in process, in market, in model. Embrace change.
A shout-out to my friend @kentbeck. For all the changes since, all the wrangles and aggravations and owwies, you hit that nail square on its head.