Microtest TDD is an effective change strategy because it dramatically improves our performance at comprehension, confirmation, and regression detection, all critical factors in handling change quickly & safely.
I know how comparatively little geekery matters right now. Sometimes I need a break, and maybe you do, too, so I share.
Black lives matter. We can fix this. We’re the only thing that can. Stay safe. Stay strong. Stay angry. Stay kind.
We’ve covered a lot of ground in considering TDD recently. The five premises, the need for a change strategy, the increasing collaboration, complication, and continuity of modern software, and some high-level less-obvious aspects of the practice itself.
I want to put some of these pieces together, and show you how TDD adds up to an effective change strategy.
It doesn’t "boil down" very well: as with most conversations around complex systems, the effect is multi-sourced and multi-layered.
This may take us a little while. 🙂
When we go to change code — and remember, adding features is still changing code — there are 3 limiting factors in our performance: 1) How well do we grasp what’s there? 2) How quickly can we confirm our change works? 3) How quickly can we confirm our change doesn’t regress?
Every way to write software can be viewed as a mixed package of constraints and freedoms. Microtest TDD’s package is different, both from the old-school approach and from test-later approaches. Those different freedoms and constraints are what give TDD its comparative value.
Remember twinning? That’s where we build two apps from (approximately) the same codebase, one shipping app and one making app. And it’s that making app that is giving us our value, both when we are using it and when we are building it.
Most of the freedoms come from using the making app. Most of the constraints come from how we build it. (I sometimes call this its artifactual vs operational value).
1) The making app doubles the visibility of developer intention, for every alternating pair of shipping part and making part. These two parts, in effect, say the same thing about the code in two different ways.
You can determine intent by looking only at the shipping part, especially in simpler applications. But the testing part adds a layer of intention that is focused less on how the code does its job and more on what that job actually is. What it’s for, not how it’s done.
This has obvious benefits for comprehension, heightening the speed and usefulness of our collaboration, not only with others, but with our prior self, as well.
(If you’ve never said to yourself "what the hell was I thinking?", I want to take a moment to welcome you to your first week in the software trade. It’s a wonderful business, full of exciting opportunity! I am sure you’ll go far.)
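Here’s a minimal sketch of that doubled intention from point 1, using hypothetical names and a made-up fare rule. The shipping part says how the fare gets computed; the making part says what a fare is supposed to be.

```java
// FareCalculator.java -- the shipping part: how the job gets done
public class FareCalculator {
    static final int BASE_CENTS = 250;      // flag-drop charge
    static final int PER_MILE_CENTS = 175;  // per-mile rate

    public int fareInCents(int miles) {
        return BASE_CENTS + miles * PER_MILE_CENTS;
    }
}
```

```java
// FareCalculatorTest.java -- the making part: what the job is, stated as checkable intent
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class FareCalculatorTest {
    @Test
    void zeroMilesCostsJustTheBaseFare() {
        assertEquals(250, new FareCalculator().fareInCents(0));
    }

    @Test
    void eachMileAddsAFlatPerMileRate() {
        assertEquals(250 + 3 * 175, new FareCalculator().fareInCents(3));
    }
}
```

Read either part alone and you get half the picture. Read them together and the intent is stated twice: once as mechanism, once as expectation.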
2) The making app gives tremendous reduction in mental scope. Remember, it puts individual shipping parts or small assemblies on separate microscope slides. This relative isolation means that a would-be changer has far less to think about at any one time.
Microtests are purpose-built to fit within the rigorous and well-documented limits of human mental bandwidth. They keep the number of independent mental entities within those limits, by design.
Each microtest tests one hypothesis. It doesn’t take the car out for a spin. It checks the tire pressure. Or it checks the ignition switch. Or it checks the brake. Or or or. This reduction of mental scope is very likely the single biggest impact of successful microtest TDD.
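To make "one hypothesis per test" concrete, here’s a tiny, entirely hypothetical sketch in Java with JUnit. Each microtest below checks a single claim about a single small part; none of them takes the car out for a spin.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

// A tiny, invented shipping part, sitting on its own microscope slide.
class TirePressure {
    static final int MINIMUM_SAFE_PSI = 28;
    private final int psi;

    TirePressure(int psi) { this.psi = psi; }

    boolean isUnderInflated() { return psi < MINIMUM_SAFE_PSI; }
}

// One hypothesis per test: each method names and checks exactly one claim.
class TirePressureTest {
    @Test
    void pressureAtTheThresholdIsNotUnderInflated() {
        assertFalse(new TirePressure(28).isUnderInflated());
    }

    @Test
    void pressureBelowTheThresholdIsUnderInflated() {
        assertTrue(new TirePressure(27).isUnderInflated());
    }
}
```

Anyone changing TirePressure later has one small part and two small claims to hold in mind, not the whole vehicle.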
3) That same isolation is what allows the making app to be so much faster than the shipping app. In most environments, all of my microtests together take less time than firing up the shipping app and clicking on the first button.
The confirmations that we get from the microtests, as we’ve said, don’t tell us that the app is the right app, or even that it works overall. But they do tell us that a given shipping part is doing exactly what we wanted it to do, and they do that in milliseconds, not minutes.
4) The making app develops as the shipping app develops, iteratively, incrementally, and evolutionarily. This process leans heavily into two huge values for the humans doing the developing: profluence and rhythm.
Profluence — a sense of flowing forward — is one of the great motivators of human activity. The rapid cycling between the shipping part and its corresponding making part "adds up", providing, well, simply put, increased satisfaction.
And the rhythm, alternating tension & release, does the same thing. It provides a beat to the work, and every time we release our tension with a green bar, we give ourselves a tiny jolt of dopamine, providing, again, increased satisfaction.
Now, there are certainly people who’ll tell you that the satisfaction level of the developer is not an important factor. Those people are mistaken, and we have well over a century of research establishing that beyond any question.
5) The making app leads to tightly directed debugging, which is both easier and faster. Debugging will always be with us, but TDD’ers consistently report both fewer and shorter debugging sessions when they’re using TDD.
It’s because of the speed and the isolation provided by the making app. Have a theory about some problem in the shipping app? We add tests — hypotheses about shipping parts — typically in a minute or two. And we can fire up the making app in seconds to confirm or reject.
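Here’s a hedged sketch of that move, with invented names. Suppose our theory is that pennies get lost when a bill splits unevenly. We write the hypothesis down as a microtest and let the making app confirm or reject it in seconds.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// The suspect shipping part: naive integer division silently drops the remainder.
class BillSplitter {
    int[] splitCents(int totalCents, int ways) {
        int[] shares = new int[ways];
        for (int i = 0; i < ways; i++) {
            shares[i] = totalCents / ways;
        }
        return shares;
    }
}

// The hypothesis, written as a microtest. If the theory is right, this goes red.
class BillSplitterBugHuntTest {
    @Test
    void splittingTenDollarsThreeWaysLosesNoPennies() {
        int[] shares = new BillSplitter().splitCents(1000, 3);
        assertEquals(1000, shares[0] + shares[1] + shares[2]);
    }
}
```

A red bar confirms the theory and hands us a tight, repeatable reproduction; a green bar rejects it and we move on to the next hypothesis, all without ever firing up the shipping app.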
6) The making app forces design constraints in the shipping app, and those design constraints align very nicely with our most current sense of "good design".
We talked about old-school design theory and the relative infrequency of change, but that was only relative: many of those designs had built-in support for planned change. They had a great deal of expected cold-swap support, for different hardware, for multiplicity, and so on.
Microtest TDD honors every one of those design principles, but with an assumption of unplanned change. And it not only honors them, it actively reinforces and sometimes even requires them.
You can’t microtest TDD bad designs: large classes, God classes, multiple responsibilities, heavy implementation inheritance, bad names, long complicated functions. Putting it bluntly, TDD’d designs are consistently "better", in those old-school terms.
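One small, hypothetical illustration of that pressure: to microtest time-dependent behavior at all, we’re pushed to hand the clock in from outside rather than reaching for the wall clock. That injected seam is exactly the kind of small, single-responsibility design the old-school principles were already asking for.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;
import java.time.Clock;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;

// Because the clock is injected, this part fits on a microscope slide;
// a hard-wired call to LocalDate.now() would make it much harder to microtest.
class TrialPeriod {
    private final LocalDate startedOn;
    private final Clock clock;

    TrialPeriod(LocalDate startedOn, Clock clock) {
        this.startedOn = startedOn;
        this.clock = clock;
    }

    boolean isExpired() {
        return LocalDate.now(clock).isAfter(startedOn.plusDays(30));
    }
}

class TrialPeriodTest {
    @Test
    void expiresThirtyOneDaysAfterItStarted() {
        Clock fixed = Clock.fixed(Instant.parse("2020-07-01T00:00:00Z"), ZoneOffset.UTC);
        assertTrue(new TrialPeriod(LocalDate.parse("2020-05-31"), fixed).isExpired());
    }
}
```

It isn’t that microtests demand injection for its own sake; it’s that big, tangled parts simply refuse to fit on the slide, so the design bends toward parts that do.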
Standing back now, do you see the connection between each of these factors and a higher performance at comprehension, confirmation, and regression detection?
As I wind down, I want to offer once again an important caveat: TDD is a rich & complex skill, not a simple add-on technique, but a whole different way of seeing the problem of change.
It takes learning, and it takes practice.
Tho I’m actually kind of tired of writing about it, in coming muses I want to talk about some problem-patterns: common situations we encounter as TDD’ers where "naive" TDD will get us into trouble. For those, we need to add answers & alternatives to our technical repertoire.
Learning those isn’t free, and especially learning those while we’re on the job isn’t free. I want to move towards an approach to TDD adoption that gets those into our toolkit in a rapid & smooth way.
Microtest TDD is primarily about effective change. It magnifies our performance by easing comprehension, confirmation, and regression detection. It does this in a variety of ways, with much synergy between them.
It is an effective strategy for change, and we need one.