Sticky Change: “Changers Feel Better”

yesterday, this popped out.

today i want to elaborate a little on that second point, that sticky change happens when the changers feel better. all three words of “changers feel better” carry weight, have subtleties, and present possible fail points, so let’s look at them one at a time.

changers are the people who actually have to make the trip from one behavior set to another across time. changers are persons, individual humans, who are doing X now and who we are hoping will do Y soon. as such, they are a source of rich variety that has huge implications for the stickiness of any given X->Y change.

if we did a spectrographic analysis of the changers, we’d find that each one shows a different fingerprint, along measures like “tolerance for change”, “current satisfaction”, “attention available to give”, “persistence when failing”, and on and on the list goes. trends exist, patterns, there are bell curves in each of those measures, but a fail point emerges when we — seeking change for others — cling too tightly to whatever model we have of the general mass. changers are not a general mass, they are individuals.

“better” is about the relative state of those individuals, when they were doing X before, and when they are trying/doing Y now. the single most common fail point i see in change-seekers is what we call “letting best be the enemy of better”.

wanting the new behavior Y to be ever and always the finished final best way to do things would be fine, *if* that desire were not itself often the very thing preventing its own accomplishment. sadly, it is. choosing best vs better often leads us to failure in two ways. 1) it makes us spend more time arguing about best. 2) it makes us want to make larger changes than the human changers can integrate over time.

and we come to the feel. “changers *feel* better”. this is very hard to get at. not only is feeling even abstractly a very slippery concept, but if you look around yourself right now, you’ll see that even your own feelings are fluid, flickery and flummoxing to you.

i suppose what i’m saying with that word is about a vague general response to the new behavior set Y that’s shaped this way: “i like doing Y more than i liked doing X.”

the obvious fail point in sticky change is simply that that is *not* the vague general response. when asked, the changers say they don’t like doing Y more than X, or worse, they actively dislike it compared to X. but that’s just the obvious one. there are lots of little possible losses here.

  • it happens sometimes that some, even many, like Y better, but that others, even a plurality, don’t.
  • it happens sometimes that we don’t like Y better for reasons having nothing to do with the value Y was supposed to offer.
  • it happens sometimes that a person likes Y better but actually does something slightly Y-shaped that isn’t Y.
  • it happens sometimes that a single Y-disliker can cast a very long shadow over a group that actually likes it.
  • sometimes “changers feel better”, but things aren’t actually better.
  • sometimes change-seekers feel better but changers don’t.

all of these are chances for us to fail at getting to sticky change. navigating them is exactly the art of coaching. it is why being an expert in agility is not the same as being an expert at coaching.

the secret to sticky change is getting as early and often as possible to this one state: “changers feel better”. the secret to being a coach is sidestepping fail points and aligning the various elements in such a way that you provide value to your clients. it’s all a great mass of nuance and sensitivity, and being a coach means willfully engaging with it. it’s a fascinating field of endeavor, if one has the patience and the good cheer for it.

The Technical Meaning Of Microtest

i write things called “microtests” to do my development. doing this yields me greater success than i’ve had using any other technique in a 40-year career of geekery, so i advocate, wanting to share the technique far & wide.

before we can talk seriously about how or whether this works, we need a strong grasp of what a microtest is and does, from a strictly *technical* perspective, w/o all the trappings of the larger method and its attendant polemic.

a single microtest is a small fast chunk of code that we run, outside of our shipping source but depending on it, to confirm or deny simple statements about how that shipping source works, with a particular, but not exclusive, focus on the branching logic within it.

i want to draw your attention to several points embedded in that very long sentence. it’s easy for folks to let these points slide when they first approach the idea. that’s natural, we always bring our existing context with us when we approach a new idea, but if we underplay their significance, it’s easy to head down both theoretical and practical dead-ends.


first, we have microtest “externality”, the property that says the microtest code runs outside the shipping code.

this is a snap from a running app. the app’s called contentment, and when it runs it draws this diagram, interpolated over time, as if i were drawing it by hand.

this, on the other hand, is a snap from a microtest run against the contentment app:

why do they not resemble each other at all? because the app is the app and the microtests are the microtests. they are two entirely separate programs.

Source-Only Relationship

second, and this is a little buried, the connection between these two separate apps exists at the *source* level, not at runtime.

the microtest app does not launch the app and then poke it to see what happens. rather, *both* apps rely on some shared code at the source level. the app relies entirely on the shipping source. the microtests rely on the shipping source and the testing source.

this is very important to grasp very early on. in some testing schemes, our tests have a runtime connection to an instance of our app. they literally drive the app and look at the results.

this is not how microtests work. nearly every modern programming language comes with the ability to use a single source file in two different apps. microtest schemes make heavy reliance on this capability.

Branchy Focus

third, microtests focus primarily on branchy logic within the shipping source, with secondary attention to some numerical calculations, and sometimes a small interest in the interface points between the shipping source and the platform on which it runs.

here’s a file, SyncTest. it’s a microtest file that uses the shipping source’s Sync class.

the Sync object has just one job, to have its interpolate() method called. if the beat is less than the sync’s target, return true (so that interpolate() will be called again). otherwise return false (so the caller will stop asking).

the Sync object’s behavior *branches* logically. there are two branches, and there are two tests, one that selects for each case. (this example is real, if against a rather straightforward responsibility. in the full context, tho, correctly syncing the drawing to the video or audio source is actually the whole *point* of the contentment app.)
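for concreteness, here’s a rough java sketch of that shape. it’s a reconstruction from the description above, *not* the real contentment source; the names, signatures, and target value are invented.

```java
// Sketch of the Sync idea, reconstructed from the prose above.
// Sync answers true while the beat is still short of its target,
// false once the target is reached: two branches, two microtests.
public class SyncSketch {
    static class Sync {
        private final double target;
        Sync(double target) { this.target = target; }
        // true means "keep calling interpolate()", false means "stop asking"
        boolean interpolate(double beat) { return beat < target; }
    }

    // two microtests, one selecting for each branch
    public static void main(String[] args) {
        Sync sync = new Sync(10.0);
        if (!sync.interpolate(3.0))
            throw new AssertionError("below target should answer true");
        if (sync.interpolate(10.0))
            throw new AssertionError("at/after target should answer false");
        System.out.println("both branches pass");
    }
}
```

notice how each test selects exactly one branch of the logic: that’s the whole trick.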

microtests mostly focus on just this: places where *our* code changes its behavior, doing one thing in one call and another thing in another call. we also use them for testing calculations. contentment does a lot of coordinate math. the part of contentment that does geometry has lots of mostly non-branchy code & plenty of microtests to make sure it does its basic algebra & trig the way it’s supposed to.

sometimes, though not often, microtests will focus on the interface points between the shipping source and its platform. the contentment app uses javafx’s animation framework. there are a couple of classes that serve this functionality to the rest of the app. the microtests here essentially establish that this interface point works correctly.

a more common example out in the wild: a bit of SQL is richer than just ‘select * from table’, and we write some microtests to satisfy ourselves that that bit of SQL gives us the right response in a variety of data cases.
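a related, purely illustrative sketch: often the richness lives in branchy code that *assembles* the SQL, and that logic microtests cheaply without any database at all. (the builder, its query, and its cases below are invented for illustration; microtesting the SQL’s actual responses usually means an in-memory database, which is a bigger setup.)

```java
// Hypothetical example: microtesting the branchy code that assembles
// a richer-than-"select *" query. Everything here is invented for
// illustration; real code would use bind parameters, not string concat.
public class QuerySketch {
    // include the status filter only when a status is supplied
    static String ordersQuery(String status) {
        String base = "select * from orders";
        if (status == null || status.isEmpty()) return base;
        return base + " where status = '" + status + "'";
    }

    // one microtest per branch of the builder
    public static void main(String[] args) {
        if (!ordersQuery(null).equals("select * from orders"))
            throw new AssertionError("no filter expected");
        if (!ordersQuery("open").equals("select * from orders where status = 'open'"))
            throw new AssertionError("status filter expected");
        System.out.println("query builder branches pass");
    }
}
```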

Active Confirmation

the fourth aspect of that long sentence above is caught in the phrase “confirm or deny simple statements”. microtests actively assert about what happens in the shipping source.

microtests don’t just run a piece of the shipping source and let the programmer notice whether or not the computer caught on fire. rather, they run that piece with a particular data context and (partially or fully) validate the results of the call.

in the SyncTest file above, the assertion line both calls interpolate() AND checks that it answers false.

microtests are active, not passive.

Small and Fast

the fifth and final (for now) aspect of microtests is that they are small and fast.

a typical java microtest is well under 10 lines of code beginning to end. it runs in milliseconds. it tests only a single branch. it doesn’t usually assert everything about the results, but usually only one or two things about them.

the SyncTest above seems trivial, because Sync’s responsibility is, however critical to the correct functioning of the app, trivial. *but*. the average microtest in contentment is five lines long. the longest is 20 lines, 12 of which are try/catch java noise.

Industrial Logic’s great app for serving online instructional material is a good case. its 1200 microtests eat under a minute.

These Are Microtests

so there we go. five important properties of microtests are 1) externality, 2) source-only relationship to app, 3) branchy focus, 4) active confirmation, and 5) small size and high speed.

if we’re talking about microtests, we’re talking about tests that have these properties.

there are many questions waiting in the wings from this starting point. among others, “why on earth would i do this?” and “how can i do this when my code doesn’t want me to?”

we can get there. but we can’t get there without remembering these properties.

TDD & The Lump Of Coding Fallacy

Hey, it’s GeePaw, and if you’re just starting to look at TDD, refactoring, the modern technical synthesis, we need to start with a couple of minutes about the Lump Of Coding fallacy.

You’re a working geek: you spend your days coding for money to add value to your company. And one day some random schmoe like me comes up to you and says, hey friend you really ought to try TDD, because that value that you’re adding, you could have added even faster if you used TDD. So, you check it out, because sometimes random schmoes actually have a clue, and you see pretty quickly that TDD means making lots of automated tests against your code. So what do you do?

Well, you back away slowly and you casually slide your hand into your pocket to get that little reassuring grip on your pepper spray, just in case.

Why is that? Well, it’s pretty simple really.

Look, you say to me, I already spent all day rolling code to add value. But if I do TDD, I do automated tests, and try to keep up with me here, doing automated tests is also rolling code.

So, if I adopt TDD, I suppose you think I’ll just spend all night coding the tests that cover the value I spend all day adding. Look, random schmoe, in the immortal words of Sweet Brown Wilkins,

“Ain’t nobody got time for that.”

OK. OK, my random schmoe response to this is to suggest that your analysis is off. And specifically, that it suffers from what we call the Lump Of Coding fallacy. You think of what you do all day as a single behavior, coding. A large undifferentiated lump of work. And you think that TDD means adding to that one lump of coding a whole different, equally large lump in order to make the tests.

Three Things We Do During A Coding Day

Here’s the thing, I don’t think what you do all day is a single behavior. I think it’s made up of several different behaviors. Let’s take a look.

  • One thing we do, we actually program the computer, and there’s two parts to that. There’s actually changing the source and then there is what we call designing, which is imagining forward how we’re about to change the source. That’s programming the computer. For many of us, it’s the best part of our day.
  • Next, we study. It’s not possible to change the source without knowing something about it. And the studying is how we know something about it. Again, there’s a couple of different parts. We scan source, which is flickering quickly from place to place. And then we read source, which is far more intense. It’s more line by line, a deep analysis of what’s really going on in the code. So that’s study.
  • The third thing we do during a coding day is what we call GAK activity. GAK is an acronym. It means geek at keyboard. And what it means is, running the program in order to accomplish some end or another. In GAK inspection, we’re running the program to see how it works right now. In GAK testing on the other hand, we’re running the program again, but this time to see whether some source change that we just made had the desired effect. And of course, there’s always GAK debugging, where we’re running the program, this time in debug mode, or print statements, or whatever, to see why that source change did not have the desired effect.

Two Points For Later

Now, before we go any further, I want to make sure you know two key aspects of the TDD world. First, when you’ve adopted TDD every source file really becomes a pair of files. One holds the source code we ship, and the other holds the source code for the microtests. And we use both of these files all the time, because they both offer us a great deal of information. That testing code forms a kind of scaffolding around the shipping code and we’re going to take advantage of that.

Second, TDD uses very small, very fast tests called microtests, and it runs them separately from running your entire application. The reason we can get away with testing only parts of the app, is because what matters most about our work is the branching logic that’s in it. And that’s the part we test most heavily using microtests. We run them in a separate app for speed, selectability, and ease of use.

So, take those two points and set them aside. They’re going to become important here in a minute. Hold onto them.

The Proportions Of The Three Activities

OK, let’s go back to our three activities. So, take these three things together, changing code, studying code, and GAK activity, and you see there isn’t just one solid lump called coding. There’s all these activities. Of course, they’re totally intermingled throughout the day. And that’s why we think of it as a big lump. The truth is, they actually take up very different proportions of our programming day.

Programming the computer, the best part of the day, is often the very smallest part. The GAK activity, much of which is just waiting around for things to run, or clicking through screens and typing in data in order to get to the part where you wanted to see something, that is the largest part of the day by quite a bit. And studying, the scanning and the reading, well, it’s somewhere in the middle. So those are your basic proportions.

When TDD tells you that writing automated tests will make your life better, the lump of coding analysis of the idea is both right and wrong. Let’s grow this picture a little bit. We’ll call what we’ve got now “before TDD”, and then I’m going to disappear, and we’ll put “after TDD” over on the right.

The After Picture

The lump of coding fallacy is absolutely right about one thing, automated tests are more code that has to be written. Somewhere between half again as much and twice as much as you write now. Let’s say that part of our day doubles. On the other hand, the lump of coding fallacy is totally wrong about the rest of the picture.

Next, study time will go down after TDD. It’s not that we have to study any less code in the after picture than in the before. Rather, it’s that studying the same amount of code gets faster. Why? Because of those twin files we talked about, one with shipping code and one with testing code. It’s almost like the test code forms a kind of Cliff’s Notes for the shipping code, a scaffolding that makes it easier for us to study, and this makes it far easier to tell what’s going on. This will cut our code study time in about half.

Finally, we come to the GAK time, and this is the big payoff. TDD reduces the amount of time you spend in GAK by 80% or 90%. Because TDD tests run in that special tool kit. They’re fast. They don’t fire up your application. They don’t depend on things like logins, or database permissions, or waiting around for the web to load. They are built to be fast, small, and grouped into convenient suites. Nothing completely eliminates the need for GAK work, but TDD slashes the amount of time you spend GAK-ing during the course of the workday.

So, when we look at the left and the right, from before TDD to after it, you can see it for yourself. We write more code, automated tests. And far from losing our productivity, we actually gain it. All this takes is for you to look past that single lump of coding fallacy.

GeePaw Advises

OK, it’s time for my advice. The first advice is the same advice I always give. Notice things. In your next full working day, notice how much time you spend in the three behaviors, actual programming, code study, and various GAK activities. Once you see it yourself, you might want to consider making some changes.

The second thing, well, TDD doesn’t come in a day. It takes some lessons and some practice. There’s a lot of course material out there, including mine. Watch some videos. Read a little, and really try the various exercises. Start with some toy code. Almost anything will do. Then find a small problem in your day job that has few or no dependencies on other classes. Do this two or three times. And again, notice what happens. If you like the result, well, at that point, you’re ready to get serious about TDD. And then, well, we can take it from there.

So, I’m GeePaw.

Drop that Lump Of Coding fallacy,

and I’m done!

How Long (Technique Remix)

how long (redux)?

in the technical side of the modern synthesis, we develop code by writing and passing tests, then re-working the code to make it as change-enabled as we can.

the key “how long” aspect to this: how long does the code stay not working? that is, how long in between the times we could push the code straight to production? the desideratum in the modern synthesis is that we try to measure that number on a scale using only two digits of minutes.

it’s as if we’re playing a game, and the object is to keep the application running correctly even as we are changing it. over time, as you get stronger and stronger at it, this becomes an obsession for the geek. it’s a great fun game, btw, and an old hand like me derives much pleasure from the challenges it presents. like many others, i not only like playing it for real in my codebase, i also enjoy playing in my imagination for other people’s code. 🙂

but of course, it’s not *there* just for the kicks it gives us. there are several benefits it provides for a team that seeks to ship more value faster.

  • first and most importantly, it narrows mental scope. recall that managing this, the number of conceptual balls you’re juggling in your mind, is a key part of being able to move quickly in code.
  • second, still valuable, it reinforces a deep commitment to “always shippable”. the drive for ways to change the code in small chunks without breaking the app frees us from complex source-branching strategies, and lets the value-defining teammates “turn on a dime”.
  • third, it has its own indirect effects on the code base. code that’s built this way, over repeated applications of the process, almost insensibly becomes fully change-enabled into the future.

anyway, the effects of working this way are profound, and to some extent they define the technical side of the modern synthesis. this lunch isn’t free, of course, it involves coming up with solutions to at least three different kinds of problem.

the first problem is controlling how the change is exposed to the user. large-scale changes in code don’t happen in ten minutes. that means there’ll be a time when a given new feature or changed behavior is building in the codebase, but shouldn’t be visible to non-dev users.

in a pure “add” situation, there’s a pretty basic solution: some kind of feature or rollout toggle that makes the added functionality invisible in one state and available in the other.

pure adds are relatively rare, though. most of the time, practicing evolutionary design, the code we’re changing is spread through multiple locations in the code. here things become more interesting.

two of the original design patterns come in handy pretty frequently here. 1) the Strategy pattern makes it easier for me to supply ‘pluggable’ behavior in small units. 2) the Factory pattern lets client code magically get the *right* strategy depending on toggle values.
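a minimal sketch of the pair working together, with invented names: the Factory reads the toggle and hands client code the *right* Strategy, so the client never knows or cares which behavior is live.

```java
// Sketch: a toggle choosing between old behavior X and new behavior Y.
// All names here are illustrative, not from any particular codebase.
public class ToggleSketch {
    interface GreetingStrategy {          // Strategy: pluggable behavior in a small unit
        String greet(String name);
    }
    static class OldGreeting implements GreetingStrategy {
        public String greet(String name) { return "Hello, " + name; }
    }
    static class NewGreeting implements GreetingStrategy {
        public String greet(String name) { return "Hi there, " + name + "!"; }
    }

    // Factory: client code asks for "the" strategy; the toggle decides which.
    static GreetingStrategy greeting(boolean newBehaviorOn) {
        return newBehaviorOn ? new NewGreeting() : new OldGreeting();
    }

    public static void main(String[] args) {
        System.out.println(greeting(false).greet("pat")); // toggle off: old behavior
        System.out.println(greeting(true).greet("pat"));  // toggle on: new behavior
    }
}
```

flipping the boolean is the whole rollout: the new code can live in the codebase, fully built and microtested, while remaining invisible to non-dev users.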

a caution: the road to hell is lined on either side with “one ring to rule them all” strip malls that will make you want to only ever use one technique for controlling user exposure. we know what lies at the end of that road. 🙂

the second problem is the certainty problem. how can we be certain we have not broken the app as we make our changes?

the toggle-ish solutions from before can aid us here, *if* we have very high confidence that the toggling itself is behavior-preserving. that is, if putting the toggle mechanism in doesn’t *itself* break the app.

a preferable solution, usually faster, once you’ve set yourself up for microtesting and refactoring, is to bring the app’s business logic under a high degree of scrutiny with microtests.

microtests are exceptionally cheap & powerful in asserting “this is how it works now”. if the awkward parts of what’s being tested are isolated, we can microtest around considerable chunks of the code. they give us *tremendous* confidence that we can’t go blindly backwards.

(note: that’s not certainty. there is no certainty. that’s confidence, though, and confidence goes a long way.)

the third problem we have to solve is sequencing. how can we chain together a bunch of steps so that a) we get there eventually, and b) the outcome of each step doesn’t break the app and c) we can still follow a broadly coherent path?

sequencing is the meat and potatoes for the refactorer, and when we change code, we very often also have to refactor what’s already there.

sequencing very often involves inventing steps that are “lateral” rather than “towards”. what i mean is, effective sequencing often involves steps that don’t immediately aim at the endpoint.

an example: it’s not at all uncommon to do things like add a class to the code that you intend to remove from the code four steps down the line.

instead of twenty-five steps, all towards the endpoint, we take one side-step, four steps towards, and one more side-step at the end. fewer steps is better if they’re all the same size. (forthcoming video of an example, in my copious free time.)

learning the modern synthesis — the technical side at any rate — is very much about just learning the spirit of this approach and mastering the various techniques you need to move it from on-paper to in-practice. it’s not an instant fix. learning this takes time, energy, and great tolerance for mis-stepping. but the payoff is huge, and in some of these cases it can also be nearly immediate.

it all begins with this: “how long?”

how long will the code be not working?

pro-tip: shorter is better.

How Long?

how long?

what amount of time passes between you saving the file you just changed and you seeing the results of that change?

if that answer is over 10 seconds you might want to wonder if you can make it shorter. if that answer is over 100 seconds, please consider making a radical change in your approach.

thinking is the bottleneck, not typing. we’ve been over this a bunch. but waiting you can bypass — *all* waiting you can bypass — is waste.

i know firmware folks who routinely wait 20 minutes from save to run. why? because they are running on the target, and that means sending the image to the target, stashing it in the NVRAM, and rebooting. every time. wow, that must be some intricate h/w-dependent rocket science stuff they’re working on, eh? and sometimes the answer is yes. but i’ll let you in on a secret. not usually. usually it’s logic that’s got almost nothing to do with the peculiarities of their h/w target. i’m not joking. http and ftp servers, things like scheduling — *calendar* scheduling, not multi-task scheduling.

before we all pile on, tho, maybe we better check ourselves on this.

  • do you have to bounce a server or a service inside a container to see your results? do you have to switch to a browser? do you have to login and step through three pages to get to the right place?
  • do you have to manually or automatedly clear the cache or wipe the database? do you have to go look up stuff in a special place?
  • do you have to edit a config file, or just quick like a bunny bounce to the shell and type in those same 87 characters again (except for the three in the middle)?


and checking the results. do you do that in the browser, too? or maybe you study the log output. or, again, bounce to the shell and tweak 3 characters of the 87. of course, maybe you don’t check the results at all, one does see that from time to time. “it ran, so it worked.”

if i don’t know whether my change did what i wanted in 10 seconds i get grouchy. sometimes i suffer it for an hour, cuz the change i’m making is the one that will free me to get the faster feedback. but generally, long wait-states between change & results turn me into Bad GeePaw.

the answer, nearly every time, is to use one of several techniques to move your code-you-want-to-test from a place where that takes time to a place where that’s instant. sometimes doing this is a heavy investment the first time. most non-tdd’ed code is structured in one of several ways that make this difficult. demeter wildness, intermixing awkward collaboration with basic business logic, god objects, dead guy platonic forms, over-generalization, primitive obsession, all of these work against readily pulling your code apart for testing it.

the trick is always one step at a time. remember the conversation the other day about supplier and supplied? start there.

you’re either adding a new function or changing an old one, yeah? if you’re greenfield in your tiny local function’s context, try this: put it in a new object that only does that tiny thing. it sits by itself. declare one in a test and call it. lightning speed.

if you’re brownfield, you can do the same thing, but it’s harder. the key is to first extract just exactly as much of the brownfield code as you need to its own method. then generally pass it arguments rather than using fields. now rework it for supplier/supplied changes. finally, once again, pull it to a new class. note: you don’t have to change the existing API to do this. that API can new the object and call it, or, if it’s of broad use, new it in the calling site’s constructor.
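a sketch of where that sequence lands, with invented names and an invented bit of business logic: the extracted rule lives in a tiny class, takes arguments instead of reading fields, and a microtest can declare one and call it at lightning speed.

```java
// Sketch of the endpoint of the brownfield move described above.
// LateFeeRule and its numbers are invented for illustration: the point
// is the shape, pure logic with arguments in and an answer out, no
// fields borrowed from the big legacy class, no awkward collaborators.
public class ExtractSketch {
    static class LateFeeRule {
        int fee(int daysLate, boolean isPremiumMember) {
            if (daysLate <= 0) return 0;
            int base = daysLate * 50;             // cents per day, say
            return isPremiumMember ? base / 2 : base;
        }
    }

    // declare one in a test and call it: no server, no login, no waiting
    public static void main(String[] args) {
        LateFeeRule rule = new LateFeeRule();
        if (rule.fee(0, false) != 0) throw new AssertionError("not late: no fee");
        if (rule.fee(4, false) != 200) throw new AssertionError("4 days late: 200");
        if (rule.fee(4, true) != 100) throw new AssertionError("premium pays half");
        System.out.println("lightning-fast microtests pass");
    }
}
```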

the hardest problem you’ll meet: high fan-out methods or classes. these are classes that depend directly on every other package in your universe. for these, well, don’t start there. 🙂

legacy rework is not for sissies.

TDD: Resist Integration Tests

the expense of an integration test can be extremely high. consider the contentment app. this app makes drawings 1) that distribute across time, as if they were being drawn live in front of you, 2) that are generated stochastically, 3) with a “pixel-inaccessible” framework.

now, it’s important to understand that none of these problems are insurmountable. before you tell me how you’d surmount them, let me tell you how i could. 1) screw time. rig it so it draws as fast as possible and wait for the end to assert. 2) screw stochastic, rig your prng so that you control the number sequence. 3) screw pixel-inaccessible, treat the medium like h/w and put a tee in front of it, assert against the tee.
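for the prng rigging in particular, a tiny illustrative sketch (the jitter function is invented): seed the generator and the “stochastic” sequence becomes perfectly repeatable.

```java
import java.util.Random;

// Sketch of "rig your prng so that you control the number sequence":
// java.util.Random with a fixed seed produces the same sequence every
// run, so a stochastic computation becomes repeatable in a test.
public class SeededSketch {
    // an invented stand-in for any stochastic computation
    static double[] jitter(Random prng, int n) {
        double[] out = new double[n];
        for (int i = 0; i < n; i++) out[i] = prng.nextDouble();
        return out;
    }

    public static void main(String[] args) {
        double[] a = jitter(new Random(42), 3);
        double[] b = jitter(new Random(42), 3);
        for (int i = 0; i < 3; i++)
            if (a[i] != b[i]) throw new AssertionError("same seed must give same sequence");
        System.out.println("seeded runs match");
    }
}
```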

all eminently doable, at some expense. so should it be done?

i prattle on about the money premise, and i want to be sure you understand how it applies here. please do attend, as this is absolutely *central* to learning how to make TDD work for you.

the money premise says we’re in this to “ship more value faster”.

reminder: “value” could mean any number of things, including fewer bugs, better runtime, or more cases or features. all that matters about the definition of “value” for this purpose is that it is dependent on us changing even modestly complex logic in code.

suppose you surmounted the three difficulties above. what would you have to assert against at the end? depending on your technique for trapping the output, you’d have either an ASCII log with a bunch of labelled doubles in it, or a literal image snapshot. you could either eyeball a given run and say “that will do, don’t change this caught behavior,” which we call “bless”, or you could figure out approximate values for those doubles in advance and assert against them, which we call “predict”.

either way, you will have spent a great deal of money getting those assertions, yes? the three surmounted challenges are not cheap, and tho blessing is cheaper than predicting — we’re talking about a lot of data here — neither of those is very cheap either.

what do the literal assertions you write actually assert conceptually about the contentment app?

that, say, given a blank screen, a specific sequence from the prng, instructions to human-draw over time a line from one corner to the other, at the end there is indeed a line from approximately one corner to approximately the other.

wow. you haven’t proven very much. a real script involves dozens of such lines, as well as text. a real script uses a real prng. a real script is inherently stochastic beyond your control because it depends on the timing of the not-owned-by-you multi-threading in the rendering. (aside, not to mention the odds are good that your test is quite fragile, and will break when you change things in the code that do not actually matter to the user.) i could prove all the things you proved without any rig at all. i could write the script that does that and run it and look, in a fraction of the time, and know just as much as that automated integrated test tells me.

in order to get very high confidence that my code could be safely changed, i would need *thousands* of these extremely expensive tests, and thousands of hours of predicting or blessing work besides.

now, your app may not be graphical like mine, it may not be performing its main function distributed across time like mine, and it may not be stochastic like mine. or. well. maybe it *is* like mine. if you’re working database to browser in a huge sprawling database with thousands of users and hundreds of screens and complex workflow and the kind of bizarre business logic enterprise apps demand, maybe it is like mine.

writing an integration test for it would mean investing a very great deal in proving a fraction of a fraction of a fraction of the program’s viability: selenium, a test server, the slow runtimes of the intranet, the golden master db, and most particularly the effectively infinite cyclomatic complexity of the app as seen entirely from the outside.


don’t do that.

put simply, the money premise says that we do TDD because we want more value faster. integration tests in most complex apps do not provide more value faster. as a direct result, in TDD we write very few integration tests, and suggest them very rarely.

My TDD Isn’t Judging Your Responsibility

An old blog of mine, We’re In This For The Money, recently got some attention, delighting me, of course. A full h/t to Jim Speaker @jspeaker for that! He quoted me thus:

Too busy to #TDD?
Too busy to pair?
Too busy to #refactor?
Too busy to micro-test?

I call bullshit. My buddy Erik Meade calls it Stupid Busy, and I think that nails it pretty well. All you’re really saying is that you’re too busy to go faster.


Most of the responses were RTs and likes, both lending me fresh energy. A modest number of folks chatted with me, and that gives even more juice. (Readers please note: the sharers of these things you enjoy need to hear that you enjoy them, early and often. It is what gives us the will to share.)

I Believe What I Said…

Since that article, from 2014, I’ve reworked the ideas in it many times. I have developed theory to back it up, experiments by which someone can prove it to themselves, and a great deal of teaching material. I have written and spoken repeatedly on the topic of shipping more value faster. It’s part of almost everything I write about the modern software development synthesis. About the best recent statement, though I am still formulating new takes on it all the time, is in the video (w/transcript) Five Underplayed Premises Of TDD.

If you want to ship more value faster, you might want to consider TDD, as it will very likely help you do that, for nearly any definition of value you wish, provided only that the value involves writing logically branching code. I genuinely believe that the modern synthesis is really the fastest way to ship value that we have at this time.

…But I Don’t Agree With Everyone Who Believes What I Said

And yet. I can easily see how one might infer from that quote — even from that whole blog, or from other parts of my output — that I am suggesting that using the full modern synthesis is a responsibility for a professional developer. Or to reverse the sense, that I am saying it is irresponsible to be a working programmer who works in another way.

Some who responded took me as saying that and opposed that idea, others took me as saying that and favored that idea. I’ve no idea what the RTs and likes were inferring.

I am not saying that it is irresponsible of a working programmer to not do TDD et al, because I do not believe that.

Frankly, the idea gives me the willies. So I feel it’s incumbent on me to give you some detail about what I would actually say about this question of responsibility.

(Writers don’t get to decide what readers make of their ideas. You’ll take the money premise and all the rest of my output you decide to bother with, and you’ll make of it what you will. All the same, if I can see how you might think I was saying that, and if I disagree strongly with that, I feel my own — normal, individual, human — sense of responsibility to do what I can to make my ideas more transparent.)

Near-Universal, Individual, And Fraught With Internal Conflict

Nearly everyone develops a sense of what they are obliged, by their own description of who and what they are, to do in this world. We normally call this the sense of responsibility. For most of us, the twin pillars of that sense are 1) our deep beliefs about what is morally right or wrong, 2) our deep connections in the many networks of humans in which we find community.

Those senses of responsibility are rich, vague, staggeringly complex, and full of outliers, odd twists, preferences, and, when put into language, startling contradictions.

Above all, they are individual. There are aggregate and statistical trends, of course, and if one could find an N-dimensional space big enough to hold the variables, one would see bell-curve like manifolds all through it. I have a sense of responsibility, and the odds are overwhelming that you do, too. Our respective responsibility-senses may appear to be similar, even highly similar. But they are not. It only appears that way because of the limits of language and the locality of the context we use to frame them. Our responsibility-senses are different because we are different.

If you read widely, you will have seen hundreds, maybe thousands, of situations in which one responsibility a person senses is pitted against another responsibility that same person senses. If you don’t read at all but you are old and thoughtful enough to seriously review your past, you will also have seen hundreds, maybe thousands, of situations like this in your own personal history, and that of others you know well.

Cutting To The Chase

Boy oh boy was I ever about to sprawl out and produce a nine-volume treatise on ethics and responsibility. From my simple dog-like goodness, I will spare us all. Instead, I will sum up my beliefs:

  • that the modern synthesis is the best current way we have of shipping more value faster, whenever that value depends on logically branching code with lifetime longer than half a day.
  • that I don’t honor my own sense of responsibility at all times, because I am weak or foolish, or because I carry conflicting senses simultaneously.
  • that it is not for me to say what is responsible or irresponsible — global words — about any local context involving people who are strangers to me.
  • that “doing your job well” is very often not dependent on shipping more value faster.
  • that “doing your job well” can mean many things that are not good for you or the company you work for.
  • that “doing your job well” is in any case not particularly more important to me than any number of other responsibilities I carry.

If you are a working programmer, you are a grown-up.

You are in the difficult position of balancing dozens of competing needs, using only your heart and body and mind, and I am well aware that this is a task far more demanding in real life than it seems, on paper, in a blog, or spoken by an idol. I won’t do that for you because I can’t do that for you. I only sometimes manage to do it for me.

In One (Long) Sentence

Please don’t take me, even in a crabby mood, as someone who assesses your sense of responsibility, commitment, or heaven forbid, morality, on the basis of whether or not you do TDD or any other part of the modern synthesis in your job.


Use Supplier, Supplied, Or Both?

a coding pattern: replace supplied data with a supplier, or a supplier with the supplied data.

it is very common to create code with an interface like this: do( Data supplied ) . we then use the data somehow to perform our functionality, whatever a do(…) method does.

on the other hand, sometimes we create code with an interface like this: do( DataSource supplier ) , and its body is basically the same as our starting chunk, but with a prologue that asks the DataSource for the Data and *then* performs whatever do(…) does.

so when we sketch a new function, we always have to mull over which of these two is the right thing to do. and when we refactor, similarly, we have to wonder which of these two ways to go. there is not a “right” choice you should always make. sometimes the supplied is the way to go, sometimes the supplier. but there *is* a right thing you should do, and that is to pay attention to the decision, as it has a variety of potential consequences.

here are some notions to mull over that might help you make your decision.

first, consider that you can do both. you can write do( Data supplied ) and do( DataSource supplier ). (i’ve assumed overloading, but you don’t need it; just change one of the names.) just have the supplier version contain the prologue we discussed and chain into the supplied version. note: please chain. don’t just copy the body of one into the other. doing that is purposefully injecting anti-change into your codebase.
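here’s a minimal java sketch of the both-versions-with-chaining idea. Report, ReportPrinter, and the method bodies are all invented for illustration; the shape is what matters:

```java
import java.util.function.Supplier;

// hypothetical payload type, standing in for "Data"
class Report {
    private final String body;
    Report(String body) { this.body = body; }
    String body() { return body; }
}

class ReportPrinter {
    // the "supplied" version holds all the real logic
    String print(Report supplied) {
        return "== REPORT ==\n" + supplied.body();
    }

    // the "supplier" version is only a prologue: ask for the Data,
    // then chain into the supplied version. no copied body.
    String print(Supplier<Report> supplier) {
        return print(supplier.get());
    }
}
```

a caller with the data in hand uses the first overload; a caller who only knows where to get it passes a lambda like `() -> loadReport()` to the second, and both run the exact same logic.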

second, if you don’t do both, make sure that whichever one you do is the least awkward collaboration for you to test with. remember “awkwardness”? this is any property of a collaboration that makes it annoying for us to test whatever branching logic do(…) performs.

in java, for instance, streams can come from anywhere including files, but files can *only* come from a drive. if i have to write and maintain a bunch of files “somewhere else” to do my testing, well, that’s awkward. and when we have awkward tests, we don’t write/run/debug them.
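a small sketch of dodging that awkwardness in java. LineCounter is a made-up example; the point is that a method taking a Reader can be tested with an in-memory string, where a method taking a File would drag the drive into every test run:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.UncheckedIOException;

class LineCounter {
    // accepting a Reader (not a File) means a test can hand in
    // new StringReader("...") and never touch the filesystem at all
    static long countNonBlank(Reader source) {
        try (BufferedReader in = new BufferedReader(source)) {
            return in.lines().filter(line -> !line.isBlank()).count();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

production code still gets to pass a FileReader; only the tests get to skip the drive.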

third, regardless of picking one or both, is the object you’re passing in far fatter in interface than what do(…) needs to do its work? you might want to wonder if you need an intermediate object, one that is neither the supplier nor the supply, quite, but is a thin wrapper.

the business of passing around obese interfaces is extremely common in “it rubs the database on its browser” code, and it dramatically increases the mental scope required to grasp and change the method. a typical case: passing a database connection (and an account id) when do(…) really just needs an account’s zipcode. you do understand that your database connection aims at a turing complete system, yes? it can do *anything* a computer can do. this issue isn’t limited to the supplier choice; we do the same thing with supplieds. again, you just want the zipcode. an account, whether databased or in RAM, has *dozens* of fields and operations.

you might think introducing an intermediate object is dumb, because even tho you *could* perform any of an account’s operations, you’re only really going to ask for zipcode, so why go to the trouble?

(ahhhhhh. here we strike at the very heart of the strictly technical aspects of the modern software development synthesis. change-enabling code is nearly always more valuable than shipping the next change. so begins another muse.)

the short answer is still twofold.

  1. cuz code that only gets the zipcode *must* be dead simple, and dead simple makes us go faster.
  2. when we realize we also need a) a state and b) a method to validate that they’re correct, we have a ready place to put it.
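a sketch of the intermediate-object idea in java. ZipcodeSource, Account, and ShippingEstimator are all invented names, and the “53” prefix is an arbitrary stand-in for whatever “local” means:

```java
// the thin intermediate: neither the supplier (a database connection)
// nor the whole supply (an Account) — just the one thing do(...) needs
interface ZipcodeSource {
    String zipcode();
}

class Account {
    // a real account has dozens of fields; one is enough for the sketch
    private final String zipcode;
    Account(String zipcode) { this.zipcode = zipcode; }
    String getZipcode() { return zipcode; }
}

class ShippingEstimator {
    // this method's mental scope is exactly one string,
    // not a turing-complete connection or a fat domain object
    static boolean isLocal(ZipcodeSource source) {
        return source.zipcode().startsWith("53");
    }
}
```

calling it is just `ShippingEstimator.isLocal(account::getZipcode)` — since ZipcodeSource has a single method, the method reference is the whole adapter, which is why the intermediate costs almost nothing.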

so. when you sketch a method or approach refactoring that method, consider your choices around passing supplier vs supplied vs both. consider both, consider awkwardness, consider interface width. there’s no right answer every time, but it’s almost always worth taking a minute and mulling it over.

go geek out, then.

Methods Don’t Create Health

maybe the real first wrong was the slide into “method” in the first place.

have you ever known yourself to behave in a way that is generally perceived as neutral or even positive, but in a way that is actually doing you or others harm?

Depressive Solitaire

my work makes me think a lot. and i quite often do that thinking on a low boil in the background, as i’m doing something else, most often for me playing a game. a lot of folks do this with solitaire, for instance. there is nothing wrong with solitaire, and there is a lot right with letting your mind work on something by looking away from that something. playing solitaire, or taking a walk, or what-have-you, generally neutral or even positive.

but i am not thinking every time i play a game. sometimes i’m just playing my game and enjoying it. other times i’m doing something else altogether, something “not good”. the “not good” thing i’ve been known to do is just sit there neither enjoying the game nor doing background development, but rather, just completely shutting down while i go through the motions.

a certain amount of that seems unavoidable, for me and for us. after all, humans do background development work that isn’t detectable by us, and we don’t always know what’s going on. and sometimes shutting things down is the only healthy response, too. over-stimulation isn’t good for one, either.

but — i was a practicing hardcore depressive for over 30 years — when i am shutting it all down, it’s very often a warning sign. and if i shut it down for days and days and days, it’s way more than that. in my case, and i know i’m not alone, it’s life-threatening.

What Makes Some Processes Depressive?

what does that have to do with my opening salvo?

consider these cases: “just playing”, “background processing”, “healthy withdrawal”, and “depressive/OCD”. what makes one different from another?

what makes one different from the other is the human. that is, me. it’s inside me. it is somewhat visible to me, a little visible to those who know my patterns well, and virtually invisible to outsiders, or at least invisible until dangerously late in the game.

i’m hoping you’re still with me, that you can relate to some or all of that long digression, because now i want to jump back to the question of method.

Back To The Method

have you ever seen a happy team doing method X, for values of X like Scrum or XP or DaD or Prince or SaFe or — good lord, we don’t have enough characters for all of these brands?

i have. most definitely.

pick that X where you’ve seen happy healthy productive teams. i’ll just pick Scrum, since it’s so widespread and familiar to many of you. but you pick one where *you’ve* seen the good stuff.

now have you ever seen an unhealthy unhappy unproductive team doing that same method X?

i have. most definitely.

so what was the difference, between the rockin’ team and the despairin’ one? it wasn’t their method. (btw, if you’ve been around the block enough, you’ll be able to apply this to nearly any technique, whether it’s bundled into a brandname or not. i have seen test-first teams that were in great pain, for instance.)

well. what was different was the people. just as what was different between the neutral or positive game-playing and the depressive game-playing was the person. and when i say “the people,” i don’t mean the particular combo of individuals. that is, in the game-playing, the healthy and the unhealthy are both being carried out by the same me. (by the same token, i worked with that TDD-in-pain team for a while, and we turned TDD back into a lovely healthy thing for them. same people.)

same people, different state.

different state can be variously translated: different spirit, or mood, or culture, or any number of other words. suffice to say i am talking about things that are vague and inchoate but still real, internal to the team and internal to the members of the team. (i have seen many teams depressed to the point of life-threatening. it is a normal part of my work.)

Process Can’t Create Health

and here’s my point — long awaited even by me, and i’m the one doing the typing here — process, structure, method, technique, can inhibit health or it can permit health, but it can’t *create* health.

what we were after 20 years ago was health. the structures, methods, techniques all around us were actively inhibiting that health, so we sought another way.

now, that way we sought, don’t mistake me here, involved structure & method & technique, no question about it. partly because we’re geeks and we groove on that kind of shit. partly because the old stuff would not make way if we didn’t offer some new stuff.

Abandon “Process Creates Health”

the first of the four points of the agile manifesto says we value people over process. for “process”, read structure, method, technique.

when i play my game depressively, i am honoring process over people. when a team scrums depressively, or dads or safes or xps or tdds depressively, they are valuing process over people.

no part of this movement’s origin was based in the idea that “process creates health”. but everywhere we look, there are people claiming that it does and wearing our colors.

i wish to disassociate our movement from the idea that process creates health.

if you’re an “agilist”, of any stripe, please help me do this.

Coaching: How I Get Them To Do What I Want

a little more on coaching theory today…

folks ask me a lot of questions whose fundamental focus could be expressed as “how do you get them to do what you want them to do?”

so. here goes. i hereby reveal my entire competitive advantage as a professional coach, by telling you my special secret for how i get them to do what i want them to do.

i don’t. i don’t get them to do what i want them to do. i get them to do what they want to do themselves.

that sounds weird, maybe a little zen-like, i spoze. “what is the sound of the one hand getting them to do what it wants them to do?” but i don’t really mean anything very mystical by it. it connects back to the always-small-always-improve thing we’ve talked about before.

a common way to express these problems is to talk about horses and water, “you can lead a horse to water but you can’t make him drink.” (tickled memory: the algonquin round table wits used to play “use this word in a sentence” as one of their punning games. i think it was dorothy parker who used “horticulture” in a sentence this way. “You can lead a horticulture, but you can’t make her think.”)

here’s the thing about horses and water. horses *like* water. they drink it all the time. they drink it because they’re thirsty, and because it tastes or smells good, because it cools them, and it heals them, and prolly other reasons i don’t even know. when a horse *doesn’t* drink water, what does that mean? well, it could mean a bunch of things, some pretty scary, but i could collapse those into one vague hand-wave: it isn’t drinking this water just now because it does not want this water just now.

well, if i only get horses to drink when they want to, how do i even do my job at all? i mean, the whole point of getting hired to be a coach is to get horses to drink the software development modern synthesis water.

i do my job by hanging around and/or pitching in, with really good water right in my hands in just the right portions with just the right flavor. when the people i’m with are thirsty, i give them water.

and the more i manage to pull this off, the more they come to expect me to be the kinda person who gives out good water. some of them come very quickly not only to enjoy my water but to want to know where i keep getting it from, so they can get it when i’m *not* standing around.

as water-bearers, software development coaches *do* have certain advantages.

  • first, the water these folks have right now is insufficient or gross-tasting or both. i do know folks who work w/high-functioning teams in environments that crackle with energy & hope and embrace both learning & change with vigor. but that’s actually pretty rare in the trade.
  • second, we are usually invited in for the express purpose of coaching. they don’t know how to do it themselves, someone thinks it needs doing, and they ask us to come help.
    note: not *everyone* in an org thinks it needs doing. 🙂 but usually the someone who hires me packs some weight, and my victims have the general idea that they’re supposed to attend to me a little. as it happens, i turn out to not be just another boss-y guy, and i turn out to be occasionally useful, so i get the *good* part of being hired by a heavy without the *bad* part. by and large.
  • third, as coaches we are not responsible for all the things these teams are held responsible for, which gives us lots of time to work on preparing water and watching for thirst.
  • fourth, we generally have far broader experience in water-procurement than our teams do. i know a lot of ways to get water. i know a lot of different ways to flavor water. i know a lot of kinds of thirsty.

so, let’s back away from this water metaphor thing for a minute.

advice: develop a catalog of “better-than-this” changes. the job is not to install the one best way to make software. (there is no one best way, but that’s for another day.) the job is to make things *better*. and even a little better is better.

advice: focus in the beginning on small owwies that they agree are owwies. it’s both doing your job, making things better, and it’s generating “capital” you can use later for larger owwies or owwies they don’t yet know they have.

advice: *collaborate* *with* *them*. “coach” is not a synonym for “boss” or “teacher” or even “expert”. work on your collaboration skills more than any other thing.

advice: be kind to the victims and kind to yourself. forgive both parties as often as you can. neither you nor they can always be open, strong, decent, right, or fast. in the AA rooms, we say, “time takes time”. patience is one of your most important resources.

advice: look out especially for the times when they want to do something you also want them to do, and be ready with a small concrete step that will pay them back quickly.

anyway, that’s what i got. how do i get them to do what i want them to do? i don’t. i get them to do what they want to do.