My five TDD premises stuff has been well-received over the months since I put it out, but one of them still seems very underplayed, even by many dyed-in-the-wool TDD’ers: the correlation premise.
The correlation premise says that the internal quality of our code correlates directly with our productivity. When the internal quality goes up, productivity goes up. When it goes down, productivity goes down.
All of the premises were formulated in part to undercut dangerous misunderstandings about what TDD is and why we do it, of course, and the correlation premise is that, too.
The misunderstanding: that the quality of the code can be traded for the productivity of the coder, that these two dimensions are orthogonal and antagonistic to each other.
Evidence that this misunderstanding is still in play is plentiful. And while I can certainly see how folks new to the made-making-maker triple focus of our movement misunderstand, I’m surprised on a regular basis by "agile advocates" who are still embedding it in their work.
Even in casual chatter and shoot-from-the-hip conversations, numerous folks I think of as inside the movement strongly imply they’re still bound to the orthogonal/antagonistic vision of code quality and productivity.
Let’s work this out again, as simply as possible.
The nature of programming is always this: at time T I have code C, and at time T’ I want to make code C’ from it, so I make changes to C until it becomes C’.
Right? No demurrals yet?
That’s what programming is: changing some code C into some code C’. It is never not that.
Consider a program that prints out the integers from 1 to 10 inclusive. Call that code C.
There are myriad ways to do that, but I’ll pick two.
In codebase C1, we have 10 print statements in a row, each with a constant argument of the correct integer. In codebase C2, we do it with a variable in a for loop. Each pass through the loop, we increment the var & print it.
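Here’s a minimal sketch of the two, in Python. The original doesn’t name a language, so this is purely illustrative:

```python
# C1: ten literal print statements, one per integer.
print(1)
print(2)
print(3)
print(4)
print(5)
print(6)
print(7)
print(8)
print(9)
print(10)

# C2: a loop over a variable that we advance and print each pass.
for i in range(1, 11):
    print(i)
```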
Now consider program C’, our target program: it has to print out the integers from 1 to 100 inclusive. I want you to think about the difference between changing C1 to do the new C’ and changing C2 to do it.
Which one do you think will be easier to change?
Well. You don’t have to write many baby programs before you agree that changing C2 will take less time than changing C1. Both programs implement C. Both programs can be changed to implement C’. But one of them is faster to change than the other.
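To see why, look at what each change actually requires. Again a sketch, assuming the Python versions above:

```python
# Changing C2 into C': edit one number.
for i in range(1, 101):
    print(i)

# Changing C1 into C': type out ninety more print statements.
# print(11)
# print(12)
# ...and so on, all the way up to print(100).
```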
The entire meaning of "internal code quality" is exactly that. ICQ is precisely synonymous with "faster to change".
Now. I know that the various things we talk about as ICQ have at least three properties that can make this a difficult step for us as analysts. Let’s make sure we mention those right now before this thing goes off the rails.
- There is only modest consensus about what the actual details of ICQ practice are. The trade has nearly unanimous agreement, for instance, that GOTOs are usually negative. No such agreement exists around, say, immutability.
- The trade has changed even its agreed-upon mind several times over the years. Structured programming’s single-exit policy, Hungarian notation: there are lots of these now-largely-abandoned practices that were once considered fundamental to ICQ.
- Horrific "enforcement" projects abound in the trade, simultaneously pushing both well-known ICQ positives and well-known ICQ negatives pretty much willy-nilly, and forbidding at considerable inefficiency any sort of maker-centric usage of judgment.
But all three of these really say just this: we are still figuring out what ICQ is. That’s not an argument against the premise; it’s an argument for efforts to get better, thicker, stronger senses of ICQ.
We want to ship more value faster. We don’t care whether that value is more features, more performance, or more stability.
For all three of these, there’s a C and a C’, and if this starting C is easier to morph into C’ than that starting C, we’d rather start with it.
If all of the easier starting C’s share attributes, we call those attributes ICQ, and when we create all our new C’ code, we want that code to have those attributes.
If you shudder at a goto, or a 5000-line method, or an early return, or a late return, or any typographical feature of a codebase, well, hell, you already bought into the correlation premise. Now we’re just haggling over what is and isn’t ICQ.
ICQ is meant to be entirely synonymous with "faster to change". We are still defining it, and will be for the foreseeable future. But if you care about productivity, you care about ICQ. If you want to go faster tomorrow, you care about the quality of the code you wrote today.
ICQ has nothing to do with the definition your customers have of value. It’s not about fewer bugs or more. It’s not about fewer features or more. It’s not about slower code or faster. It’s about going from C to C’ at the maximum possible speed, regardless of the value-increment.
If you want to get inside the TDD mindset, you have to think of TDD as an approach to maximizing the production of code changes, regardless of what those code changes accomplish for the user.
And if you advocate TDD, I would recommend saying this loudly: TDD is not about any aspect of customer experience other than the speed with which we can offer improvements to it.
I can go faster by making C’ less valuable to the user: incomplete, slower, unstable. I can do that, have done that, will do it again.
But I can’t go faster by slowing down the rate at which I can change code. Not ever.
That’s the correlation premise.