Musing: Five Underplayed TDD Premises

(For an update, with video and transcript: try here.)

here are five underplayed premises of TDD.

why “underplayed”? well, they’re there. hardcore TDD’ers model them all the time. but it feels like they just don’t get the camera time. i want TDD coaches and teachers to step back from “what’s an assert” and rigid rule systems, and highlight these premises in their own work.

the money premise: we are in this for the money. that’s a funny way to say it, on purpose, but here’s what i mean.

i use TDD for one reason and one reason only: to move features faster. more faster features is my job. TDD does that. in particular, here are some substitute reasons to do TDD that aren’t nearly as spot-on. i don’t TDD for art. i don’t TDD for intellectual purity. i don’t TDD for morality or good citizenship, or even quality.

when i write tests, i do it because writing tests lets me make more features faster. that’s the only reason. when i don’t write them, it’s cause writing them — for some case — does *not* make more features faster.

the judgment premise: there’s no computable algorithm for rolling code well. geeks doing TDD use individual human judgment, all the time. we use our judgment to decide when to write a test, or not. when to write a faster one, or not. how to ‘joint’ a problem testably, and so on. flowcharts of TDD are training wheels at best. they leave out parts of what i do because they *must* — i’m a TDD’er, not a TDD algorithm. we are permanently, absolutely, irremediably — and happily — dependent on our individual human judgment as TDD’ers.

the chain premise: the best way to test a chain is to test it link by link. this premise underlies our huge preference for testing very tiny subsystems of our larger app. if a function involves a chain of objects, A -> B -> C -> D, and we satisfy ourselves that A works if B works, B works if C works, and so on, we have come *very* *close* to satisfying ourselves that ABCD works.

i can hear the yabbits blowing in the warm spring breeze. please read carefully: very close. why is very close so good? because single-link tests are orders of magnitude easier to read, scan, write, and run than larger-scale tests. and remember: i’m in this for the money. those cycles i don’t spend reading, writing, and running large-scale tests are cycles i use to move more features.

the chain premise is about a preference, not a rule. i often test subsystems, or even through subsystems, but the chain premise pushes me towards the smallest and lightest subsystems that will prove the point.
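the chain premise fits in a few lines of code. here’s a minimal python sketch, with all names (A, B, C, FakeB, FakeC) invented purely for illustration: each link gets tested against a tiny fake of the next link down, so no test ever runs the whole chain.

```python
# a chain A -> B -> C, tested link by link. names are hypothetical.

class C:
    def value(self):
        return 2

class B:
    def __init__(self, c):
        self.c = c
    def doubled(self):
        return self.c.value() * 2

class A:
    def __init__(self, b):
        self.b = b
    def report(self):
        return f"result={self.b.doubled()}"

# tiny fakes: each link's test fakes only the next link down.
class FakeC:
    def value(self):
        return 10

class FakeB:
    def doubled(self):
        return 42

def test_c():
    assert C().value() == 2                      # C works

def test_b_works_if_c_works():
    assert B(FakeC()).doubled() == 20            # B works, given a working C

def test_a_works_if_b_works():
    assert A(FakeB()).report() == "result=42"    # A works, given a working B

test_c()
test_b_works_if_c_works()
test_a_works_if_b_works()
```

three single-link tests, each trivially fast and readable, and together they get us *very* *close* to knowing ABC works.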

the correlation premise: the internal quality of my code correlates directly with the productivity of my work. we could say a lot about external quality (EQ) vs internal quality (IQ), but i’ll shorthand it here. EQ is things detectable by an end user. IQ is things detectable only by a programmer with the source.

the correlation premise is hard for some folks cuz they confuse those, EQ and IQ. here’s the thing: you *can* trade EQ for more features faster. you can. if my customer doesn’t care if it’s slow, i’ll finish faster. if she doesn’t care that it’s ugly, i’ll finish faster. doesn’t care that the program’s rules don’t map perfectly onto the domain’s, same.

consider the polynomial of my daily productivity. two big terms are 1) my skill, and 2) the domain’s complexity. if i hold my skill and domain complexity constant, what’s the next biggest term, the one that dominates my production function? it’s the quality of the material i start with. this is the correlation premise doing its work.

finally, the driving premise: tests and testability are first-class participants in design. when we sketch designs, we consider many first-class factors. the two that non-TDD’ers and noobs focus on: ‘will it work’ and ‘will it be fast’. but the driving premise says the third first-class question is ‘will it test’.

tests & testability considerations *shape* our code. the biggest block i’ve seen in young’uns getting to TDD is their unwillingness to change designs to make them readily testable. the driving premise says we have to. there’s a tangent i’ll offer, maybe later today, on soft-TDD vs hard-TDD, but either way, we have to.

so those five premises underlie almost every move a TDD’er makes. and this is all pretty flavor-agnostic. it cuts across schools pretty well.

we’re in this for the money. we rely on human judgment. we test chains link by link. we keep IQ high. we shape design w/testability.

my coaching and teaching friends, if you’re out there doing your thing today, please talk to folks about the premises you’re using. we absolutely have to get past this thing of creating faux-systems of rules, and into transferring the heart of TDD.

When I Need to Not Pair

so, a friend asked me to say more about “not pairing”. as so often, it triggered me to muse.

sometimes i *need* to not pair.

now, don’t get me wrong, i love pairing. i love it for two reasons.

  1. it makes me a better geek. that is, i learn from pairing.
  2. pairing makes two geeks more productive than if they solo’d. that is, a pair writes mo’ better code than two solos.

but there are times when i need to not pair. what are those times like?

well, first thing is, they tend to be less than an hour of keyboard time.

then there seems to be something about the circumference of me and my pair’s mutual experience. what i mean is that the problem, a coding or debugging problem, is right out at the boundary of anything either one of us has experienced.

the third thing is that it usually involves a situation where we’ve exhausted all the “dumb ideas” about how to proceed.

the fourth is that it usually involves me wanting to pursue a lengthy, i.e. >5 minute experiment. that is, i need to *drive* for more than 5 minutes in a single focused direction to come up with a next effort. that driving might be code, it might be surfing, it might be in the debugger, but by pairing standards it will be a long drive.

and finally, it never happens unless i’m also feeling personally frustrated with our progress.

so that’s my muse. sometimes i need to not pair for up to an hour. i tell my pair. we split up and rendezvous later.

@GeePawHill on Pedagogy In The Geek Trades

i find 4 major failings in both how & what we teach in geekery.

  1. we mostly don’t. that is, actual teaching of actual geekery-for-a-living is almost non-existent.
  2. we suffer in attitude & content & structure from the trivially flawed “information transfer” view of what teaching & learning is.
  3. we lay claim to more knowledge than we actually have, teaching crude guesses as if they were writ, and aphorisms as if they were Euclid.
  4. we withdraw from the field, abdicating our responsibility and leaving it occupied by marketeering hucksters and know-nothings.

in the interests of rigor i could now stall with an elaborate causal analysis, but i’m gonna cut to the chase on this and keep moving. it’s ultimately happening because we are a *very* young trade that has had forty years of epic levels of insane demand for our output. i see more and more that this is the root of nearly all that befalls us in the geek trades: the seemingly insatiable demand for output.

i’d call out the “agile” movement, but truth is it’s everywhere in geekery. agilists didn’t invent and likely aren’t even primary in this. so i survey this field, and i see these things that i think must be Very Bad[tm]. i wonder what i might do about it all?

so? we can’t make people not want software. in fact, we don’t even want to make people not want software.

but what *can* we do, then? or, to reframe so i am less lecture-y-to-you, what can *i* do?

what i can do is, first, choose very modest projects or steps in the face of the radical uncertainty surrounding the whole enterprise. i especially like that cuz it’s what i do as a geek anyway. it’s just that i’m doing it in an area — education — where i don’t usually.

second, i can make those projects as fundamentally open and free as i know how to make them and still feed my fat little self & the geekids. in particular, i want to not be in it for the money and i want to *prove* i’m not in it for the money. one thing that signals this to me, even if it won’t yet to others, is that every project should have 100% open books. (i first learned of open accounting years ago from a book called “Honest Business”. the associated memes have changed, but not the guts.)

third, i can both constrain and expand my efforts with a focus i’ve never explicitly taken before: the daily life of expert geeks. i’ve always taught “information”. i’ve taught theory and technique, i’ve formulated “rules”. i’ve generalized & idealized many topics. i’ve quested for hierarchical knowledge structures and generally worked top down. i’ve overvalued theory.

and one thing i’ve really undervalued, too. and that is the fact that i am a me, and you are a you. plato, may he burn in a perfect geometric hell for all eternity, launched this great lurching horror of mythical not-personal “knowledge”. i have come very slowly to see this over many years. an overweening fondness for beige, “impersonal”, pseudo-scientific, pseudo-objective, pseudo-knowledge seems a particular enemy.

it’s just as much an enemy as the much more easily ridiculed hucksterism, terrierdom, and corporate sloganeering that seem so visible. let me put it this way. le geek, c’est moi. & it’s not isolable from the moi that farts, gets mad, does dumb things, and giggles at himself.

these are all things i have actually known for a long time, some longer than others, and steadily increasing as a presence in my output.

i’ve had many mentors. and i want to be very clear: every single one was a goof, *openly*, a human with all the twists that entails. moments of greatness, meanness, silliness, distractedness, and — we’re talking about mentoring *me* now — remarkable patience.

but i hereby go on record. that is, i’m telling you all this so i can hear myself say it and can openly embarrass myself in trying to do it.

and *that* is who i want to be and what i want to do.

Done With “Technical Debt”

i am done with “technical debt” as a useful concept.

as so often happens, an appealing idea happened, one with useful connections to the concrete, and we took it and ran with it. unfortunately, that idea has more appeal than it does decision-making value.

folks justify the most ridiculous nonsense and put it under the heading of “technical debt”. in particular, they write shitty code, i mean *terrible* code. they do this in the name of productivity. the claim is they are accruing technical debt, and that it’s just debt, and that they’re borrowing speed from the future to pay for it now.

here are some problems with that claim.

first, it simply isn’t the case that shitty code now makes more features faster now. that is, you cannot go faster by writing shitty code. so right from the get-go the metaphor breaks down.

if i borrow Y money it is to get X now. i borrow it, i get X now, and i pay back Y + Z. would i borrow Y if i didn’t *get* X now? ummmm, no. so first, you don’t get the thing you’re borrowing for. that’s one issue.

second, debt in general, as my friend @YvesHanoulle just pointed out, is one of those “i’m good at it” things. people think they know all about debt. that, after all, is the appeal of the metaphor, right?

do look around you, tho. would any reasonable person say that the world is full of people who are good at debt? the idea that “i’m good at debt” is far more widespread than the reality of people being good at it.

so, too, with technical debt. the idea that “i am good at figuring out costs+benefits for technical debt projection” is far more widespread than the reality. so second, technical debt’s appeal is actually based on a false premise, that most of us understand financial debt and manage it well.

third, unlike financial debt, there are no tables, charts, and formulae to help us parse our technical debt. there are no *numbers* associated with technical debt. no numbers means no math. no math means analysis based on intuition.

and we’ve just pointed out in flaw #2 that — even *with* numbers — skill at parsing debt calculations is actually quite rare. financial debt success — bounded by math — is rare. how much more rare is technical debt success — bounded by guess and supposition? so the third flaw: the non-mathematical nature of technical debt makes the successful management of it even *less* likely.

let me enhance your terror a little, too, by pointing out that the technical debt decision makers don’t even have technical credentials. how likely is it that a middle-level manager who lives in meetings, powerpoint, and outlook, can assess the cost/benefit of shitty code?

so. for me, technical debt is just out the door. it fails on at least three points to be a compelling or useful metaphor.

i should back up a little, tho. there is *something* there, and we must consider it.

my friend @WardCunningham cut a video years back about technical debt, in which he explained he never meant for anyone to write shitty code. what it seems he was talking about when he first spoke of technical debt wasn’t the kind of awful crap we see today at all.

rather, it was about deferred design decisions. in the hardcore XP mode we work largely without a big overriding design model. this notion is reflected in the idea of working by stories. we solve *just* the problem in front of us. we do not solve ahead. we do not even specify ahead. we take our challenge one story at a time, work that story to completion, and move on. this means that we are always, permanently, deferring the idea of “the once and for all best design for our finished app”.

this idea, the “permanent revolution” if you will, lies at the heart of the modern software development synthesis.

we have been slow and awkward in bringing this notion to the fore. this is the source of many flaws in our current attempts to do so. think about the story wars, the gradual rising attack on capacity-box planning, the usage of “technical debt” to justify shite code. all of this muddle comes from our failure to grasp and explicate the permanent revolution and its related concepts.

i’ll take a breather there. thanks for letting me muse!

How I Don’t Apply XP, or Scrum, or Anything

these wrangles over system seem mis-focused. moreover, they seem part of the surface of the elephant i’ve been trying to describe. a system is inherently an abstraction. it compresses, filters, and selects features from an experienced reality. we formulate systems for at least 3 reasons.

first, so we can establish commonality. that is, we can use one system to describe a bunch of “different” local realities. we can say, “yes, that’s python and the web, that’s c and the pacemaker, but with a little abstraction they are the same.”

second, we use systems to “reason downwards”. up from local maps to system, reason about system, down from system maps to local. we use this particularly when we encounter some new local reality and we need help knowing what to do.

and third, of course, we do this because we are humans and we can’t not do it. one could no more stop abstracting local realities into systems than one could stop extracting local oxygen into lungs.

but there are risks associated with an over-emphasis on this system-making system-applying behavior.

the elephant i’ve been feeling around, trying to describe, is there, in the systematizing. i just overheard someone asking a (perfectly good) question about how one would apply Scrum in a given setting. i see people asking these questions all the time. it could be about any scheme for getting folks to make code together successfully.

i never apply scrum. or xp. or kanban. i never apply anything.

and we can readily convince ourselves that it’s just a simple misuse of language. they just *mean* “how can i help in this situation?” but at some point, one begins to suspect that the simple misuse of language represents a far deeper misunderstanding.

students of perception are familiar with this. the naive model says that we “perceive” the world, like a video recorder. and as you wade into perception, you quickly realize that, if we *do* record our experience, it’s a fairly low-fi recording. so they downgrade the fidelity of the recorder. and the more we investigate, the more we have to downgrade that fidelity.

guess what, tho. at some point we’ve downgraded the recorder’s quality to the point where it’s misleading to call it a recorder.

so it goes with our usage of these systems. that’s where i’m headed.

the design patterns movement was subverted, corrupted, marketed, generally turned into a money-losing nothingness at least as fast as XP. but one thing i really liked about the intellectual underpinnings of that movement was its steadfast resistance to global systematizing.

every pattern was local. every pattern tried to describe the forces at play, to express the *multiple* equilibria that could be achieved. “Observer”, bless its little dependency-reversing heart, is not *better*. unless, of course, you have a dependency you want to reverse.
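for the concrete-minded, here is roughly what that dependency reversal looks like. a minimal python sketch, with names (Subject, Display) invented for illustration rather than taken from any canonical source: the subject notifies whoever attached, without depending on any concrete observer type.

```python
# minimal Observer sketch. the point of the "reversal": Subject never
# imports or names Display; the dependency points the other way.

class Subject:
    def __init__(self):
        self._observers = []
        self._state = 0

    def attach(self, observer):
        self._observers.append(observer)

    def set_state(self, state):
        self._state = state
        for obs in self._observers:
            obs.update(state)  # Subject knows only the update() protocol

class Display:
    """one concrete observer; Subject has no idea this class exists."""
    def __init__(self):
        self.seen = []

    def update(self, state):
        self.seen.append(state)

subject = Subject()
display = Display()
subject.attach(display)
subject.set_state(7)   # display.seen is now [7]
```

worth the indirection only when you actually have that dependency to reverse; otherwise it’s just ceremony.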

the systems, scrum, xp, kanban, lean startup, ad infinitum ad nauseum, all seem to jerk us into betters-ville.

and they do their gunfights in bettersville. but bettersville isn’t local. it isn’t contextualized. it isn’t faithful to any reality.

when i hit the ground at a site, i don’t apply XP. i help identify heartaches. i help open paths & minds to their resolutions. i take steps. and only a complete fool would suggest that the steps i propose aren’t informed by XP et al, as well as by my personal experience, not to mention the transmitted experience i’ve surely mangled from my mentors and teachers.

but i am not putting in a system any more than a perceiver is a video-recorder.

i never apply scrum. or xp. or kanban. i never apply anything.


Musing: Refactoring Testless Code

refactoring in testless code is hard.

it’s the perfect demonstration of the manglish “agency” of code.

it is simply not possible to change testless code and guarantee you’ve done no damage. it’s one of the most delicate operations geeks do. there are principles, yes. there are tricksy techniques, too. but mostly, there is experience & judgment. the deep trick is to turn every mistake you make into a microtest that would keep it from ever happening again.

a key insight: never debug in legacy when you’re refactoring. the code that’s there *is* the code. it’s been shipping. it *is* right.
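one concrete way to act on that insight is the characterization (pinning) microtest: capture what the shipping code actually does before touching it. a minimal python sketch, where legacy_discount is a made-up stand-in for some testless legacy function, not code from any real system:

```python
# hypothetical legacy function. it looks off (no discount at exactly 100?),
# but it's been shipping, so for refactoring purposes it *is* right.
def legacy_discount(total):
    if total > 100:
        return total * 0.9
    return total

# pin the behavior that's actually there, edges included,
# *before* refactoring. these tests assert what IS, not what "should" be.
assert legacy_discount(50) == 50
assert legacy_discount(100) == 100      # boundary stays undiscounted: pinned
assert legacy_discount(200) == 180.0
```

with the current behavior pinned, a refactoring that flips any of these assertions is caught immediately, no debugger required.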

in that way, we can gradually pull ourselves out of testless legacy and into gradually increasing confidence. so. that’s all. be of good cheer. you get out of legacy the same way you got into it, one tiny step at a time.

oops. one more thought. noob TDD’ers won’t yet understand this, but the act of retrofitting microtests will reveal the correct design.

that is, a microtestable design *is* a good design, to some arbitrary epsilon. making it microtestable *is* fixing it. as you proceed with non-net-negative endpointless steps, the code will start to scream at you about the major refactorings that are needed.

Shifting Certainties

shifting certainties. this is where i’m headed these days.

without belaboring criticism, what i’m seeing is that we have a trade with a whole stack of roles and humans to fill them, and, of necessity, they have assembled a varied, sometimes compatible sometimes not, set of certainties by which they navigate.

the trouble is that, even when the certainties align with one another, they, ummm, aren’t. that is, they aren’t certainties at all. neither our data nor our experience actually back up most of them.

so for a couple of years i’ve been all certainty-abolishing in my tone. that has worked exactly not at all. because we *need* certainties, accurate or no. to live in perpetual doubt is not a common human capacity.

so now i see that it’s not that i can just abolish the certainties, i have to find replacements for them. alternatives. i want to stop saying “let go of this,” and instead say, “grab hold of that.” that’s what i’m calling shifting certainties.

i have a list of them, partial and likely incorrect, with the “from” on the left and the “to” on the right.

some examples of what i mean…

  • from “motivate geeks” to “avoid de-motivating geeks”. from “transfer information” to “provide experience”.
  • from “argue from theory” to “design experiments”. from “endpoint-centric” to “next-non-net-negative stepping”.
  • from “being right” to “building rich consensus”. from “no mistakes” to “mistakes early and often”.

there are more, but that offers a sampling. it’s all pretty inchoate for now. but in the last few months i’ve come under certain influences. and they are enabling me to — maybe — formulate a model i can explain and demonstrate that puts these certainty-shifts into perspective.

thanks for letting me muse. i’ll doubtless be returning many times to this shit. work-in-progress, don’tcha know.

The First Coaching Days

i can’t over-emphasize for new coaches the importance of rampant opportunism. until you’ve established your miracle powers in a team, you won’t be able to move big levers, only small ones. which small levers will bring you the biggest bang of trust & faith the fastest?

some possible openings: we find a bug that’s an exemplar of a *family* of bugs, and we refactor so it never can occur again. or we have an untestable bit of code, if they’ve started TDD’ing, and we change it so it’s now testable. or, rally sucks & exhausts us, so we make a below-the-line/above-the-line, keep only above in rally, and rotate a maintainer/victim role.

very geek-centric, and that’s not by accident. when i’m called in, it’s often from the top or side. i need to gain traction on the floor. you gain traction by being perceived as having already helped.

a key insight: stay mostly out of their hair in the early days. i start every gig by telling the team i realize they work for a living, and that i will be asking for very little until i find my feet and they find some confidence in me.

the “fast talk with mike” is a great technique for this. i ask for two things. 1) a loaded dev box that’s mine, 2) 20 minutes a day.

the rules for fast talk are simple: “shut up and listen and watch”. “mandatory for first 2 weeks”. and “no agenda in advance”. i spend the mornings in their code base, maybe paired, maybe not. i draw lessons, learn what tricks they’re missing, & show them in fast talk. typically, by the end of that 2nd week, i’ve had dozens of sidebars where people say, “show me how you did that.” or “take a look at this.”

it’s all tiny steps, tiny steps. when you’ve taken all the first level of tiny steps, the second level that was too-big looks & feels tiny. so then you start on those. and so on. eventually, if you are actually making life easier, they will ask you to solve *hard* problems.

and you’re on your way. 🙂

a word: sometimes your floor manager wants you to do more faster. this is an opening, too. i say, “i *am* doing more faster, you just aren’t in a position where you can tell.” i’m changing AND modeling how change works AND helping. i am often asked by these folks how they can tell it’s working. i explain, you will know if it’s working, i will show you the indicators.

the key in the short term at the beginning: actually help right now in little ways.

another opening: most teams already have a standup of sorts. most of those standups are weak and ill-loved. the kernel of the problem: people talk too much. there are several flavors of that.

1) people solve problems in standup. don’t answer any question that takes longer than 2 sentences to resolve.

2) people go around the circle of attendees instead of going around the work in progress. focus on the work.

3) managers tend to be focal points, so they turn into reports-to-managers. ask them to lay low or even not come for a week or two.

4) some players want in on every single conversation. that’s a hard one to crack. easiest: tell them to walk around after and set up times.

fixing standups is such a little thing, but it’s easy and it helps from day one, and that’s what you want, early on.

On One Ring To Rule Them All

i’m thinking of this thing called “justification privileging,” or alternatively “explanation monism”. or even, shorthand and jokily, “one ring to rule them all.”

one constantly sees tweets, blogs, even books, where someone boils down staggeringly complex and ill-understood processes to one factor. today i saw “people don’t make decisions rationally, they make them emotionally.”

now, set aside for the moment that no one even knows what those words mean other than at some vague gut-check level, even then, it’s just not so fucking simple.

why did i rush in there w/o a test? why did i first write a test? why did i *anything*?

the real, serious, answer, for anything i don’t do 100% of the time, is . . . are you ready? . . . here comes . . . “idunno.”

i can always explain away any action i can take. always. it is a human faculty. but all of those explanations are really post facto. and while some of them may have accurate elements, most of them should be highly suspect. including the ones from introspection. to say we make decisions from “emotion,” or from “reason,” or from “breakfast food,” or from any one thing, is either corrupt or naive.

humans are complex beyond any beggar’s dream. to assume as a general rule any one cause, even any primary cause, is a mug’s game.

or, and this is important, to assume it *implicitly*, *blithely*, as a matter of course. that’s what i’m really aiming at here. of necessity, assumptions are required. we *must* guess at others’ motivation. that’s not optional if we’re to act at all.

but to guess the same thing every time?

every time i don’t test it’s cuz i don’t know the value of testing? it’s cuz emotion? it’s cuz boss wants features? i doubt it.

there’s no one reason we behave. so there’s no one explanation for our behavior. privileging a single explanation over and over again is a classic noob coach fail. believing it’s always reason X leads one to always reach for remedy Y. always reaching for remedy Y has a name: “noobism”. some folks always reach for a political explanation. others for a technical one, an emotional one, a rational one.

for noob coaches, the most common one ring to rule them all is “knowledge”. if i can just get these guys to *know*, they’ll act differently. but knowledge is far far far from being the basis on which most people act from moment to moment.

to be a better coach, be a better practical tactician, a better listener, a better watcher.

christ, after all these years, i just realized i’ve become an advocate for fox vs hedgehog approaches to coaching.

it’s funny, as i always found Berlin to be very smart and very unreadable.

Why Do We Seek One Ring To Rule Them All?

yesterday i mused about explanation privileging, where one always reaches for one ring to rule them all in their explanations of behavior. this morning i am thinking about the reasons that happens.

don’t be alarmed, i’m not gonna suggest there’s just one reason for it every time it happens, i’m circular, every argument is circular, true enough, but i’m not *that* circular. it takes more than one step. 🙂

one reason it happens is biology. there are huge biological reasons why strategies such as “always think real hard” are contraindicated. in “thinking fast and slow,” we’re given dozens of cases where humans make rapid-fire decisions and get situationally poor answers.

kahneman’s explanation is connected to heidegger’s concept of thrownness via evolution. we are thrown (biologically) into a world where we *must* act, and often enough, we must act *quickly*. the alternative is death. so our brains are, naturally, built with that in mind. explanation-privileging is a shortcut, and we value shortcuts a very great deal.

another reason? i, we, prefer to use the best tool we got. when i have a remedy Y that i am good at, i *want* to use it as often as i can. i’ve a friend who’s a great teacher. good w/words, passion, explanation. she reaches for the “teaching” weapon, tho, first thing every time. it hampers her effectiveness as a coach, in my view. sometimes people don’t *want* to be a student. they want something else.

explanation privileging is also part of the ordinary abstraction-seeking that healthy human minds do. the great value of abstraction is that you have to know less to use it. if i have *one* rule, that’s patently easier than if i have *50*.

the problem with being a fox rather than a hedgehog is that you have to constantly be watching every damned thing. easy to see the attraction from there of a monist approach. explanation-privileging dramatically reduces the # of things i must consider.

i notice, as i puzzle this out, i am throwing out lots of “reasonable” factors. but of course, there are tons of unreasonable ones, too. some days, i’m severely hungover, and my explanations of others’ behavior tend in, well, the same shameful despairing direction as my heart. it’s not reasonable of me to think of everyone else as feeling the same as i do. it provides no benefit at all. it just is what happens.

there are a million variants on the unreasonable reasons we privilege explanations. anger. grief. mommy issues. stepped in a puddle, and so on. it’s a mistake, i think, to think of oneself and others as always being reasonable at all. many times i am not. many.

another factor i see is a kind of overweening fondness for intellectual purity. i speak here not of actual value provided, that is, we already have discussed the real merits of abstraction. rather, i’m talking about an unreasonable over-valuing of those merits.

some folks like to have one ring to rule them all at a level far beyond reasonable. they seem *driven* to intellectual purity. desperate, even, is a word i would use. i am always riven with twinned feelings of pity and disgust in such settings. i get very dickish.

so. given all these, and i’m sure more, justifications or explanations or drives towards explanation-privileging, how does one resist?

i say that like i know. i don’t. i *think* i know *sometimes* how *i* resist it in *some* situations. 🙂

maybe later i muse on the tricks i use to avoid monism in my explanations and my remedies, on those occasions i manage to.