Rework Avoidance Theory, or RAT, is likely slowing your team down more than rework ever would.
Let’s talk a little about that today.
I am writing these geekery muses in a time of great turmoil, but for the most part they’re not addressing that crisis. They are momentary respite, for me, and hopefully for you. They’re not the main story.
Stay safe. Stay strong. Stay angry. Stay kind. Black lives matter.
Rework Avoidance Theory is a cluster of related ideas that see a change as having a clear start-point & end-point and a straight & stable path between them. It draws its inspiration largely from reasoning built on relatively straightforward metaphors from the physical world.
One metaphor is that of a footrace. The change is a well-marked track with one runner. It assumes 1) that the cost of a step is linear in its size, 2) stability and perfect knowledge of the path and its endpoints, and 3) indifference to post-footrace consequences.
A second metaphor sees the change as a finished product, built from standard parts by isolated individuals and assembled at the end. It makes assumptions similar to the footrace's, but also assumes that parallelism is free and that specialization pays large bonuses.
Metaphors are just tools for reasoning, and these two have broad application to a number of different kinds of enterprise. They’re not insane or stupid or malevolent. In fact, when their assumptions are met, they can produce excellent methods for change.
The problem we encounter when we apply them to software development is that those basic premises, the assumptions, are not reliably valid. Necessarily, then, the conclusions we draw from using them are also not reliably valid.
One conclusion that occurs constantly is that our chosen method will be inefficient if it ever does the "same thing" twice. That phrase, "same thing", can have several different (and sometimes conflicting) senses, but RAT tends to take any notion of same thing and forbid it.
The proviso "no same thing twice" has lots of variants. The three I want to talk about today are "don’t learn about this twice", "don’t code this twice", and "don’t talk about this twice". All three encourage us towards larger batch sizes, and as such, slow us down dramatically.
The three of them are often intermingled, so I’ll just pick an arbitrary case for each of them, even tho my examples usually involve simultaneous application of all three in some proportion.
Meeting culture, where teams spend several hours a day in large sessions, is based heavily in "don’t talk about this twice". The idea is that we can transmit information more efficiently by gathering a group and transmitting each element of that information just one time.
This is the Smithian manufacturing model. It’s as if "transmitting information" was a workstation, hammering out the heads of nails that we’ll later put on the sharpened wires from the other station.
This model requires a great deal of assumed standardization, about the shape of the info, the nature of transmission, and the capacities of the individuals involved. That assumption is, to put it mildly, nonsense.
Evolutionary development, at larger scale and at the micro-scale of TDD, is a recurrent target of "don’t code this twice" thinking.
When we do TDD, we "code this twice" in two senses.
- the test code in some sense restates the shipping code, a kind of second coding of the same thing (a tiny sketch follows this list).
- the tests develop the shipping code evolutionarily, meaning we’ll rework that code multiple times as we add tests.
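Here's a minimal sketch of that first sense, my own illustration rather than anything from the post: the test is a second, outside-in statement of the behavior the shipping code already states from the inside.

```python
# Illustrative only: the shipping code and its test each "say" leap-year
# behavior once, so the behavior is, in a sense, coded twice.

# shipping code
def leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# test code: a second statement of the same behavior, example by example
def test_leap_year():
    assert leap_year(2024)        # ordinary leap year
    assert not leap_year(2023)    # ordinary non-leap year
    assert not leap_year(1900)    # century exception
    assert leap_year(2000)        # 400-year exception to the exception
```

RAT reads that doubling as waste to be avoided.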
At larger scale, with evolutionary development, it’s that second kind of "twice" that catches the eye: we change the same page, or json, or api, or endpoint, twice.
The assumptions in place here are twofold.
First, we assume a linear relationship between the size of a change and its difficulty. If "step" has any overhead whatsoever, and that assumption holds, then sure, taking one larger step would be better than taking three smaller ones.
But linearity doesn’t hold. In fact, the bigger the step, the bigger-er the effort of doing it effectively. This has to do largely with the strict human limitations on mental bandwidth, and partly with the practical unpredictability of implicit interaction effects in codebases.
The assumption that step-size can be reliably linear-correlated with step-effort is false. (It’s false in footraces, too, but that’s another story.) And because it’s so often false, we can never use it in our reasoning uncritically.
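To make the nonlinearity concrete, here's a toy model of my own (not from the post): suppose the effort of a step grows roughly with the pairwise interactions among the parts it touches, about n(n-1)/2 for n parts. Under that assumption, three small steps come out far cheaper than one big one, even before we count any per-step overhead.

```python
# Toy model, purely illustrative: effort ~ pairwise interactions among the
# parts a single step touches, i.e. n * (n - 1) / 2.
def effort(parts_touched: int) -> float:
    return parts_touched * (parts_touched - 1) / 2

one_big_step = effort(12)              # touch 12 parts at once  -> 66 "units"
three_small_steps = 3 * effort(4)      # touch 4 parts, 3 times  -> 18 "units"

print(one_big_step, three_small_steps) # 66.0 18.0
```

The numbers aren't the point; the shape of the curve is. Any superlinear cost of step-size flips the "one big step is cheaper" conclusion on its head.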
Secondly, we assume that the landscape between here and some change-endpoint is stable, well-marked, and further, that it is the only source of value. But, at the macroscale of evolutionary development in particular, this is, once again, not remotely the case.
Not only does the target shift, routinely, in response to the vagaries of the market, but the shortest path to that target also shifts, routinely, by way of new technology, new technique, and new insight.
And once again, in this logic, we both overstate the cost of "same thing twice" and understate the considerable value provided by small steps inherently, above and beyond the value they derive from being part of getting to the endpoint.
I’m tempted to blow past "don’t learn this twice". It has so much in common with the others that I don’t think it’s worth going on about at length. But I’m going to hang in, just do it quickly.
Specialists get caught in this trap all the time: We need an A and a B, and you know A and I know B. Therefore, the most efficient way for us to do the work is for you to do A and me, B, then put our learning together at the end.
In many respects, this is the ultimate case of the Smithian workstation idea. It is based on the belief 1) that the two workstations can be synced and controlled for free, and 2) that learning is strictly instrumental to getting to the endpoint, a cost center, with no other value.
We can dispense with "learning is instrumental and a cost center": To argue that cross-domain learning isn’t valuable is to argue against metaphor, which is literally the bridging of separate domains by concepts that have correlation in both of them.
If you were against cross-domain metaphor, you wouldn’t be coming at me with this footrace/workstation idea in the first place. 🙂
Is synchronization & control free? I doubt it’s free even in imaginary 19th century assembly lines, but it certainly isn’t free for computers or for software development teams.
Programmers in many-threaded environments will assure you that sync+control is staggeringly expensive. Any of them will speak of when they’ve improved performance by single-threading, and when they’ve created unfixable bugs by mis-reasoning cross-thread. All of them.
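Here's a small, self-contained sketch of that cost, my own example rather than GeePaw's: a shared counter incremented by several threads behind a lock, versus the same total work done single-threaded. On most machines the coordinated version loses, sometimes badly.

```python
# Illustrative only: the lock that keeps the threaded version correct is pure
# synchronization-and-control overhead; the single-threaded loop needs none.
import threading
import time

ITERATIONS = 1_000_000
THREADS = 4

counter = 0
lock = threading.Lock()

def locked_increments(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:               # the sync + control cost RAT assumes away
            counter += 1

# multi-threaded: four workers splitting the same total work
start = time.perf_counter()
workers = [threading.Thread(target=locked_increments, args=(ITERATIONS // THREADS,))
           for _ in range(THREADS)]
for w in workers:
    w.start()
for w in workers:
    w.join()
threaded_time = time.perf_counter() - start

# single-threaded: same total work, no coordination needed
counter = 0
start = time.perf_counter()
for _ in range(ITERATIONS):
    counter += 1
single_time = time.perf_counter() - start

print(f"threaded with lock: {threaded_time:.3f}s")
print(f"single-threaded:    {single_time:.3f}s")
```

The analogy isn't exact, but the lesson transfers: splitting work across workers doesn't come for free, and the coordination bill can swamp the parallelism dividend.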
And managers running teams larger than a dozen people spend much of their day staring at the corollary: screen after screen of Jira, meeting after meeting featuring Jira, page after page defining the meaning of Jira. They often have it even worse than multi-threaded geeks. 🙂
Rework Avoidance Theory is a conclusion based on remarkably unstable and unreliable premises, and it is fed, in turn, into ever larger conclusions about how to work.
It consistently adds costs to software development in the name of "efficiency", and it does it at every level: in coding, in planning, in meeting, every level.
I often speak of micro-payments in low-level technical work, but RAT is not a set of micro-payments. It is a huge continuous tax, paid for by every part of most software organizations.
It is almost certainly slowing your team more than rework ever would.