Reversible Raincoat Tests

let’s review “reversible raincoat tests.”

sometimes, we build systems in which a downstream collaborator must interface with an upstream one. the two apps are built by separate teams, on separate servers, developed at separate times, and still both in development.

a reversible raincoat test is a script with two sides. think in terms of a literal script, like in a play.

“mike: hi mom. mom: hi son. mike: is today tuesday? mom: no, doofus, it’s thursday. mom: gotta go, basement is flooding. mike: okay, bye.”

the idea is a) use these scripts to facilitate the interface discussion & design. b) run those scripts in *both* directions. that is, we need to test that mike does his part correctly, AND we need to test that mom does her part correctly.

so a reversible raincoat test is always a shared resource between two teams. the great value is a) the communication, and b) the ready ability to sanity-check. there is a real knack to supplying the right amount of detail for these, and you’ll have to practice.
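the run-it-in-both-directions idea can be sketched in code. this is a minimal, hypothetical sketch, not anything from the muse itself: the shared script is plain data, and each team checks only its own side against it (all names here, like `check_downstream`, are made up for illustration).

```python
# a minimal sketch of a two-sided ("reversible raincoat") script,
# expressed as shared data both teams can run against their own side.
# all names and message shapes here are hypothetical.

# the shared script: an ordered list of (speaker, message) exchanges.
SCRIPT = [
    ("downstream", {"ask": "day-of-week"}),
    ("upstream", {"day": "thursday"}),
]

def check_downstream(send):
    """run the script one way: verify our side sends what the script says."""
    for speaker, expected in SCRIPT:
        if speaker == "downstream":
            assert send() == expected, f"downstream sent wrong message, wanted {expected}"

def check_upstream(reply):
    """run the script the other way: verify the other role's lines are honored."""
    it = iter(SCRIPT)
    for speaker, line in it:
        if speaker == "downstream":
            _, expected = next(it)  # upstream's scripted reply follows
            assert reply(line) == expected, f"upstream replied wrong, wanted {expected}"

# each team plugs in its real implementation; here, trivial fakes:
check_downstream(lambda: {"ask": "day-of-week"})
check_upstream(lambda msg: {"day": "thursday"})
print("both sides honor the script")
```

the point of the sketch: one shared artifact, two independent test runs, so either team can sanity-check its half before integration day.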

one key, tho: avoid stuff like raw html captures. they tend to be far too brittle. and timestamps are also a no-no for the most part.

a key value: make them easily graspable by humans, a la gherkin or another comparable DSL.

if i can’t run *my* side of the script, there’s no point in releasing it outside my team, and the same goes for the other team and theirs. it really does save a lot of time to not do “false starts” on integration tests.

RORA – Runs Once, Run Away

today’s muse, another damned geepawism: RORA technique. RORA means “Runs Once, Run Away”. it is the standard technique in a great many software dev environments.

a developer is tasked with some story. she codes it using a variety of half-assed techniques, including mostly GAK testing. more geepawism: GAK means “geek at keyboard” testing. you know the GAK drill: run the code. look at the output. fire up the debugger. look at the output. bless it. ship it.

RORA includes not just GAK-centric testing, but *all* the things we do whose essential focus is “it ran and it worked, so it must be done.”

where does RORA come from? one must be sure to understand, i’m not ranting here about developer responsibility or oaths or such-like. RORA comes from two facts. 1) geeks don’t know how to geek well when they start. 2) managers don’t actually understand what we do.

first, the weak-geek source. see, in this trade, we throw noobs over the wall to fight the boche on their first day. we do it over and over. as noobs, they’re mostly conversant with the basic language syntax and one compiler variant. no slur on them. they’re *noobs*. so they race through the mud and the barbed wire and the corpses and they charge that story’s machine gun nest. when the shooting stops, they’re dead or the nest is. and they report to us that they did it, and we move another story in front of them.

on the managerial side. ahhhh, it’s harder for me here, as i generally loathe that world. but i will try. there seem to be three sub-issues.

1) managers press for feature-movement cuz *they’re* pressed for it.

2) geeks, eager to please, say they’re done when they’re not.

3) managers don’t have the skillset to actually assess done-ness or stability outside of actually trying the product.

all of this adds up to gigantic pressure to go all RORA on things.

here are some RORA behaviors i’d call out.

soloing in a normally-paired environment leads to RORA. i can’t overstate the anti-RORA force working with a pair represents.

not versioning/staging output is classic RORA behavior in a web-service world. when i’m an upstream team, it’s not done cuz it works. it’s done because downstream can use it. and downstream uses it FOR DEVELOPMENT, which means they need it to be stable, version-tagged, or side-by-side with whatever was there before.
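that version-tagged, side-by-side idea can be sketched very small. this is an illustrative toy, not a real service: routes and payload shapes are made up, but the point is that the old shape keeps being served while the new one ships.

```python
# a sketch of version-tagged, side-by-side output: the old shape stays
# available while the new one ships, so downstream can keep developing
# against either. routes and payloads here are invented for illustration.

ROUTES = {
    "/v1/orders": lambda: {"id": 1, "total": 10},                     # frozen shape
    "/v2/orders": lambda: {"id": 1, "total": 10, "currency": "usd"},  # new shape
}

def handle(path):
    return ROUTES[path]()

# downstream keeps working against v1 while trying out v2:
assert "currency" not in handle("/v1/orders")
assert handle("/v2/orders")["currency"] == "usd"
print("old and new shapes served side by side")
```

stability for downstream is the whole game: they need *a* shape they can count on, even while you change yours.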

obviously, having no automated tests is very much a RORA-inducer. but i’d go a little further. i need automated tests of a particular type, which is a muse for another day.

another RORA thing: does it deploy? if it doesn’t deploy, it ain’t done. if it’s upstream and it can’t be pulled, it ain’t done.

in my current environment (this isn’t true everywhere), the services are all pretty large-scale. this leads to nightmares. if your change is to a large-scale webservice, do you have an in-box version ready for your downstream to pre-develop against? RORA.

do you have actual pro-active contact with your downstream? can you auto-announce updates, not a fucking git comment stream, real updates? do the docs change? did you change them? if not, RORA.

i offer code-clients the geepaw guarantee. it does what i said it does. it does *exactly* that. if it doesn’t, call me, 24/7. i’ll fix it. if you’re not able to offer users of your code the geepaw guarantee, you prolly still have RORA aspects to your technique.

GeePawHill On Pairing

a friend asks what to do about a bad pair. that’s a juicy one, and prods me to muse.

why do we pair? it’s one of the techniques we adopt to increase productivity. that’s measured in geekery by insights per hour, or such like.

maybe if we understand what makes good pairing, we can get closer to some possible remedies for bad pairing? good pairing involves a bunch of otherwise disconnected-seeming aspects. these form an interactive context in which the pair operates. if some of those aspects are flawed, seeing the flaws would let us throw out some ideas for making it better.

1) best-pairing happens when we’re physically comfortable. if one or both of us are physically unhappy, let’s figure out how to change that. makes me think of things like monitors, evenly split seating, font size, each side experiencing equal access, visibility, and so on.

2) best-pairing happens when we’re unafraid. two factors here. am i afraid of her, or her me? are either/both afraid of the problem? a key danger: does one of us confuse the code with the self that made it? that’s a huge source of crippling fear. i’ve had many pairs whose self-identification with the code turned them into a beast i shorthand as a penis monster. these folks are so afraid to be wrong-in-the-code or wrong-in-the-pair, they push away every attempt to collaborate.

3) best-pairing happens when at least one person has a clue about how to proceed locally. this is always a shifting judgment call. sometimes i’m with a pair and we’re so clueless we split up to go discover 3-4 dumb ideas that might work. it helps to see that and do it. i stay alert to the times i need to not pair, but still not alert enough, cuz i miss the cues i’m giving myself and resist breaking for too long.

4) best-pairing happens when we verbalize more-or-less continuously about what we’re doing. ideas-to-words-to-hands is not an inherent skill. not everyone can quite manage it, tho most can be taught some baseline of it. the key is to know when you’re doing it and not, and to accept the occasions when you can’t, and speak to them, too. sometimes i say, “hey sugarlips, gimme like five minutes to jiggle this just so, quietly. i promise we’ll revert if it doesn’t work.”

5) best-pairing happens when we both have either the big picture or the small picture. that is, we need some shared vision at some level. back in the clueless context, i said i stop when we’re both clueless. when we *start* with one of us having the clue and the other not, we go to the whiteboard for 2 minutes & move it across there. i still might not get her idea, but i *start* to, and can get the rest in situ.

6) best-pairing happens when we pair promiscuously. i’m tempted to just leave that there, but will push it a little further. i like to rotate at minimum on half-day boundaries. i prefer the king-moves model, where the person who’s been on the task longest leaves. promiscuity in pairing can *really* help “bad pairs” to learn, and i encourage it.

7) best-pairing is highly spontaneous, with lots of unplanned and unplannable micro-interactions. i find techniques for pair-structuring, like driver-navigator or ping-ponging, to be at best learning aids for very junior-at-pairing pairs.

8) best-pairing is fucking intense. it can be quite grueling, emotionally, intellectually, even physically. pay very close attention to your experience, and watch for the signs you’ll get from you that you need a break. take your own feelings’ advice.

finally, a few words on how i introduce pairing to a team that hasn’t done it.

i don’t. that is, i don’t mention pairing at all. i build a good pairing station and i have folks come visit me there. we work on problems together, that’s all. at my excellent station, that’s all. i rotate that around the team. i’m aiming to create joyful experiences for them of working 2-on-1. i don’t instruct, and i don’t call it pairing.

when i do finally get to it, it’s usually because the team is experiencing the pain of silos. i make a proposal: let’s block out 11 to noon every day for an hour of pairing. we lottery the rotation, and we coin-flip whose problem we work on. i travel that experience around the team a few times before i ever mention pairing.

by floating around watching those interactions i develop a sense of who’s liking it, who’s not — and possible reasons why not. if it goes well, we extend the time. if that goes well, we extend it some more. eventually, we’re pairing at least half of our day. at that point we can drop special times altogether and just declare ourselves a pairing team, where we all seek to pair whenever we can.

the most common flaw among the ones i’ve mentioned above: unknown physical discomfort. for instance, i’m pretty deaf. you can’t tell most of the time, cuz i’m good at faking and lip-reading and what-not. it’s often invisible. but harder to mask when pairing.

second would be the penis monster. this is always a brutal challenge from a coaching point of view. how do i help people not self-identify? mostly i model: i screw up all the time anyway, and i highlight me screwing up, laughing, moving on. i do large reviews & teach them the review motto: “i am not my code.”

above all, i remind myself and the bad pair’s victims: pairing is a *skill*. it must be learned. we all have different skill levels. all over the team, we have masters at X, juniors at Y, and so on. we cope with this in lots of ways, but most of them are attempts to bring the junior’s skills up to the minimum baseline. the same exact thing applies to pairing itself.

if you just can’t understand java, after a long stretch working w/the team pulling you along, we have to part ways. if you just can’t learn pairing, after a long stretch working w/the team pulling you along, we have to part ways. but as w/the java situation, in most of my coaching career, the weak-java or weak-pair gives up on us before we give up on him.

that’s all i got for the moment. feel free to poke/prod/ask whatever followups hit you.

Crabby Note To TDD Journeyfolk

TDD journeyfolk, let me rattle your cage a little this fine afternoon.

lemme sketch a common situation for me. i have a problem, not a small one, to be solved in a tech stack i’m not intimately familiar with. further, some aspects of the problem are things no one on stack overflow has ever done before. they look, on paper, like they might be doable, but there’s no drop-in and very little advice. i’m in this situation where i have to try something — anything — and fuss with it for a while to see if it might work.

let’s call this the desperate-ignorance mode of working. i am desperately ignorant of the code i’m about to try writing.

and here’s the rattle: get off me about writing some semi-end-to-end test in junit that will prove i understand my problem/solution pair.

it is very difficult to write a *good* *preservable* *long-lived* test of a thing whose syntax you don’t understand, and while it would be nice if i only ever had to work in areas where i have confidence that i even know enough to know what i want, i quite often don’t, due to my role in the team.

so take deep breaths, and give a little trust? dogma just does not help me.

i have been TDDing since before many folks were aware what a computer was. i am a TDDer. i write microtests, i advocate them, i live by them. i am not going to ship code that doesn’t honor the geepaw guarantee. i don’t do that any more. i go faster with microtests. i choose them.

but it’s dogmatic and foolish to ask me to roll a test for a thing whose most basic mechanism i don’t understand, even from a blackbox view. i don’t write tests because God told me to. i write tests because i know what i want and it gets me to that faster.

if i don’t know what i want, i can’t even sketch a test. when i barely know what i want, i can write a huge bad ugly test expensively, one that i don’t like and will throw out the very second i *understand* what i want.

but don’t get on me about whether i’m a TDDer. i am a TDDer. i won’t let anything stop me from background-working on getting to testability.

i just don’t do that.

“well, let’s write a test.” “ummm. a test for what? a test that this compiles? a test that i will like it?” i don’t work for the tests. the tests work for me.

i already have a test that it compiles, it’s called “the compiler”. and i already have a test that i will like it, it’s called “me”. and they’re very fast, very cheap, and very reliable.

when i know what i want my code to do, i will — believe me — learn how to write a test for that. when i know how to test my code, i will — believe me — write the damned test.

i feel folks still don’t understand that these heavy flickery largescale tests are of almost no use in real TDD. they’re hard as fuck to write, & they tell you nothing a run won’t immediately tell you, when you’re a stranger in a strange land.

if i’m starting on something that will be a net of 20-30 collaborating objects, it’s bad practice to write a test out at the controller. when you’re starting something big and new, the outermost layer of objects is by far the hardest to test and least likely to be right.

can i write that test later? yes. will i? if it helps. but i’m never gonna start there.

How’m’I Gonna Test This?

i often say “how am i gonna test this?” but — language being what it is — my meaning in asking that is not likely graspable by a noob. to get at what i’m really wondering when i ask this, i may have to take a few asides & detours.

first. why am i writing a test at all? what is the test doing for me?

this touches a mantra you’ll hear me mutter over and over. “i don’t work for you. you work for me.” i tell the code this, and the ‘puter, and the “engineering process”, all the parts of a development system.

the only reason i use a tool is because it is helping me (or when i’m mistaken or guessing). when it isn’t helping me i don’t use it. so what am i getting from these tests i write? what are they doing for me that earns their keep?

several things. but i can label them all with a sort of general label: “closing the mental shoebox”. this launches another tangent.

the code i work on is big. the mind i work with is tiny. in order to do the work at all, i have to bring these two scales into contact. i’m not an essentialist, but just the same, hear me: the essence of programming well is the management of mental scope. i absolutely have to arrange things so that my tiny mind can think about my enormous code through tiny temporary windows onto it.

i think of these as tiny project shoeboxes. i put a label on the box. and i put in just the parts of the code i need for that label. in OO, the *class* is the shoebox. the name of the class is the label. the class does Just One Thing[tm]. i take out a shoebox, mess with it, then put it back on the rack and take another. that’s controlling mental scope.
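the shoebox-as-class idea in miniature might look like this. `DayGreeter` is a made-up example, not from the muse: the class name is the label, it does one thing, and a tiny test closes the lid.

```python
# a minimal sketch of the shoebox idea: the class is the box, the name
# is the label, and it does Just One Thing. DayGreeter is invented here
# purely for illustration.

class DayGreeter:
    """label: greet someone with the day of the week. that's the whole box."""
    def __init__(self, day):
        self.day = day

    def greet(self, name):
        return f"hi {name}, it's {self.day}."

# the microtest that closes the lid: once it's green, remembering the
# label is enough, and i don't have to think about what's inside.
def test_greets_with_day():
    assert DayGreeter("thursday").greet("mike") == "hi mike, it's thursday."

test_greets_with_day()
print("shoebox closed")
```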

the shoeboxes are hierarchical, too, of course. i have boxes that only contain a little bit of code and a bunch of other shoebox labels. in some apps, of course, those hierarchies are quite tall. arranging the shoeboxes so it all works is non-trivial. we call that “architecture”. but there’s no clear line between architecture, design, and the code. it’s just a spectrum from closest-to-metal up to furthest-from-metal. and i go to great lengths — tremendous lengths — to keep those boxes small, simple, and arranged so their contents are at the same level.

so. sliding back from that tangent. the things the tests do for me are all ways to secure the lid on the shoebox. when the lid is closed, i don’t have to think about what’s inside it. i just have to remember the label & the Just One Thing[tm] it invokes.

the first thing those tests do for me: they assure me that the box does what the label says it does. that lets me, when i’m in another box that *uses* this box, not worry that the box does what i said it did. huge benefit mental-scope-wise. that’s one reason the tests make me a more-features-faster geek.

it will probably surprise you to realize this, but i refuse to let you slumber on in dreamland. you *must* accept this: i’m not perfect. that means that sometimes i make boxes, close them with tests, walk whistling away from them, only to later discover i was wrong. clues in the larger world force me to open a shoebox i thot was closed, one i haven’t visited in months.

and here’s a second way the tests are working for me. the tests tell me quickly how the stuff in that box was *spozed* to work. i can scan a list of tests and see right away whether i thot i covered the case in question. if i didn’t, i do so now. if the new test passes as is, well, hell, must not have been that. the joy is when the new test *fails*. with my little failer happily in hand, i can make it green. when i’ve done that, i can put the lid back on and go back to what i was doing.

btw, not only do i make the new one green, i keep all the old ones green, too, and that’s the *third* way the test works for me. those old tests keep me from fail-toggling, where you fix thing 1 and in so doing break thing 2. then fix thing 2 and in so doing break 1. the tests never let me do that without telling me about it.

there are a handful of lesser things those tests do for me, too. they give me exempla for usage of the box, for instance. and i don’t work solo, so instructions for usage are important to more than just me.

the tests themselves are actually tiny little shoeboxes, too. that is, one test = one thing to make work. that’s a very tiny focus.

so. when i say, “how am i gonna test this?” i mean, “how am i gonna close this shoebox?”

and there’s a part of this that is central, but not obvious so far: i ask that question *before* i have any code. in fact, i usually start asking it as soon as i start conceiving of the box. the box has API. i build boxes by building API’s i want.

and among the very earliest questions i ponder while i’m doing this: how’m’i gonna test this? (that’s not the only early question by any stretch, but those are for another muse.)

anyway. for me, programming is TDD and TDD is shoebox-closing and shoebox-closing is managing mental scope.

if you want more features faster, this is the way i know how to get that from myself, and it might help you. thanks for nodding along.

The Team And Three Flows

when i think about teams, i think about them with a strange mixture of metaphors.

first i see a thing that is in some respects like one of our classic pictures of an atom. there’s some particles in the middle, and some others that seem somewhat clearly “outside” like the electrons in their clouds. but that metaphor slips a little. in atoms, the electrons & protons & neutrons are separate and separated. in teams, it’s more like a swarm.

so slip that to a flock, instead, or a school of fish. the center is moving, and fish are moving in and out of the center, changing places. the center is a kind of strange attractor, moving through space with its attendant swarm around it, all moving to stay near or far. moving through space? well. no. not space, because space is fairly empty. the space teams move in is fairly full.

having watched a lot of teams, i’ve watched a lot of flows those teams swim in and through. now, i can’t tell you i know everything required for software development excellence. in fact, i’d go so far as to say no one can.

but i do know *three* things required. i see them as flows or currents in the soup through which the team moves. they are nutrient flows. if any one of them is missing, the team dies. if any are attenuated, the team suffers. and every team i’ve seen that was developing software excellently used those three flows at nearly optimal efficiency.

the first flow i call “valued results”. valued results is the chief steering mechanism of the flock. if the strange attractor at its center is moving, it’s moving to the flow of valued results. trivially, valued results are how the team knows it’s winning, how it knows it’s going the “right” direction.

a minor but not remotely ignorable point. it’s not *valuable* results. it’s *valued*. that is, this is a true flow, into and around the team, it’s the *exchange* of output for valuation, not a distant end-result. excellent teams consistently find ways to keep these exchanges going, and there are myriad approaches.

the second flow or current through which the team swims, on which it feeds, is called “geek joy”. i use that word geek a lot. what i mean by it is someone who is highly technical and highly creative. i don’t distinguish by title or role. so when i say geek joy, i’m talking about the flow that provides the team’s *drive*. in the geek trades, we have an incalculable advantage over most others. simply put: geeks love this shit.

i know, and i know you know, many geeks who do it all day long, then come home and do it all evening long on their own projects. your designers, your programmers, your analysts, and others i’m not even thinking of right now, all of these are such geeks.

one never has to motivate geeks. one only has to avoid de-motivating them.

the flow of geek joy in excellent teams is what keeps them constantly leaping ahead, loping across the plains, to mix all metaphors further. i’ve never seen an excellent software development team that wasn’t suffused with geek joy, the sheer wild exuberance of doing THAT THING.

the third flow is the flow called “courageous curiosity”. this flow provides the team with a feeling for where it is at right now. it is the asking and answering of questions. it includes, especially, scary questions, which have hard answers. remember that the strange attractor at the center is itself always in motion. nothing sits still.

courageous curiosity is our only way to know where that attractor is going, what flows it’s consuming, & where individuals stand, relative.

so those are three flows i know are required for excellence in software development. valued results, geek joy, and courageous curiosity.

i’ve seen a lot of teams. i never saw an excellent one that wasn’t drinking deeply from all three flows. i’ve never seen a failing team that wasn’t failing to take advantage of one, two, or all of them.

as i said at the outset, i’m not sure those three flows are “sufficient”. but i am sure they are “necessary”. if your team is failing, or you think it might be, look to the three nutrient flows that keep it healthy.

is one weak? is one altogether absent? start there.

You’re Gonna Be Wrong

dear smart geeks: stop worrying so much about whether you’re gonna get it wrong. you definitely are gonna get it wrong.

how do i approach this? hmmmm. okay, i see a couple of threads that need to be pulled together. this one’s gonna be clunky, i fear. ahhh well. i’m definitely gonna get it wrong, too, i spoze. 🙂

the drive to be right is a powerful one. for some of us, perhaps, too powerful. but it does come naturally. think about when you first started geeking. you were surrounded by all these damned puzzles. and you were confused, and there was syntax, and semantics, and the desire to make it work, and so on. so, you learned. you thot. you experimented. you debugged. you read. you talked & listened. and you got better, yeah?

problems that seemed very complex became actually quite straightforward. you became a geek. and over time, your specialty, or specialties given enough time, became nearly transparent to you. the problems started even to get kinda boring, truth to tell. so you widened your scope. bigger problems. better puzzles. and it likely went on for a while. new challenges to the mind. new confusions, discussions, experiments, and then — new skills.

the problems got pretty big, and for a long time we just described them as big problems. and it felt like the problems were still solvable “from the start” if you could just think hard enough. and now we get an insight. no. they’re not. they’re not solvable just by thinking hard enough.

now having played to that little point, we have to go backwards a little. it *sounds* like i’m saying the problems are too big. like, oh, beyond a certain size, there’s never any hope. but i’m not saying that. what i’m saying is that we *experienced* the unsolvables at a certain size, but not directly *because* of that size.

rather. what happened is the problems got big enough they began to incorporate elements that prevent “getting it right the first time.” in other words, the size exposed you to those elements. the elements were always there. are always there. they don’t mystically appear at size X. rather, you’ve just begun to incorporate them at whatever your size X happened to be.

those elements form a conceptual category we still have a ton of words for, but no handy guaranteed label. here are some words that gesture towards them: ecological. organic. complex (systems sense), chaotic (math sense).

a metaphor? you’ve been solving ever larger problems by being ever more comfortable with the way the balls move on the pool table. pool balls move the same way every time (to an arbitrary epsilon). you hit it this way, it goes that way. newtonian physics relies on this.

what if your pool balls *don’t* act reliably, stably, predictably, mechanically?

when that happens, and it happens everywhere in actual professional geekery, as opposed to “coding”, well. the game changes quite a bit. these odd pool balls travel farther or less far than they should, move at angles that defy prediction, are not simple newtonian objects. those balls are the elements of very different problems than the ones you’re used to.

what are these elements? for the moment, let’s call them agents. why agents? cuz they seem to have an agency, a motive force, all their own. you could think of them as people, if you’re old enough and mentally sound enough to understand that other people are subjects, not objects.

think back on all those coding problems. they were hard, no doubt, and it was cool that you solved them. but they didn’t involve (much) agency. here’s the thing about code. code does the same fucking thing every fucking time.

yes, yes, i’m sure i’m to be regaled with weird events that seem like exceptions. c’mon. stop it. von neumann computers are deterministic. give them the same inputs and you get the same outputs, every single time. (aside: places in code where we model non-determinism are in fact fascinatingly difficult. turns out, making ‘puters stochastic is *hard*.)
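the same-inputs-same-outputs point fits in a few lines. a hedged sketch: `roll_dice` is invented here, and the trick is that once you treat the seed as an input, even “random” code is deterministic.

```python
# determinism in miniature: even "random" code gives the same outputs
# for the same inputs, once the seed (itself an input!) is pinned down.
import random

def roll_dice(seed, n=5):
    rng = random.Random(seed)          # same seed -> same internal state
    return [rng.randint(1, 6) for _ in range(n)]

# same inputs, same outputs, every single time:
assert roll_dice(42) == roll_dice(42)
# modeling real non-determinism means varying the inputs, not the machine.
print(roll_dice(42) == roll_dice(42))
```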

anyway, down to the final chase scene. problems that incorporate agents are not solvable the way problems that don’t incorporate them are. and professional geekery problems lasting longer than a week or two incorporate agency all over the place.

so? a couple of conclusions and i’ll let it go.

  • first, stop beating yourself stupid with the “sit and think to be sure you get everything right” thing.
  • second, stop arguing about what will happen in a year. everything that will happen to your system a year from now is *suffused* w/agents.
  • third, ponder in your copious free time whether approaches to solving agency-laden problems ALSO work well for non-agency problems.
  • fourth, act then look, act then look, act then look. because of the unpredictability of agency, you’ve little other choice.
  • so? don’t worry so much about whether you’re gonna be wrong. you’re definitely gonna be wrong.

the trick isn’t never being wrong. the trick is trying a thing, seeing what’s wrong, and moving to try something else, all quickly.

Five Underplayed Premises of TDD

(For an update, with video and transcript: try here.)

here are five underplayed premises of TDD.

why “underplayed”? well, they’re there. hardcore TDD’ers model them all the time. but it feels like they just don’t get the camera time. i want TDD coaches and teachers to step back from “what’s an assert” and rigid rule systems, and highlight these premises in their own work.

the money premise: we are in this for the money. that’s a funny way to say it, on purpose, but here’s what i mean.

i use TDD for one reason and one reason only: to move features faster. more faster features is my job. TDD does that. in particular, here are some substitute reasons to do TDD that aren’t nearly as spot-on. i don’t TDD for art. i don’t TDD for intellectual purity. i don’t TDD for morality or good citizenship, or even quality.

when i write tests, i do it because writing tests lets me make more features faster. that’s the only reason. when i don’t write them, it’s cause writing them — for some case — does *not* make more features faster.

the judgment premise: there’s no computable algorithm for rolling code well. geeks doing TDD use individual human judgment, all the time. we use our judgment to decide when to write a test, or not. when to write a faster one, or not. how to ‘joint’ a problem testably, and so on. flowcharts of TDD are training wheels at best. they leave out parts of what i do because they *must* — i’m a TDD’er, not a TDD algorithm. we are permanently, absolutely, irremediably — and happily — dependent on our individual human judgment as TDD’ers.

the chain premise: the best way to test a chain is to test it link by link. this premise underlies our huge preference for testing very tiny subsystems of our larger app. if a function involves a chain of objects, A -> B -> C -> D, and we satisfy ourselves that A works if B works, B works if C works, and so on, we have come *very* *close* to satisfying ourselves that ABCD works.

i can hear the yabbits blowing in the warm spring breeze. please read carefully: very close. why is very close so good? because single-link tests are orders of magnitude easier to read, scan, write, and run than larger-scale ones. and remember: i’m in this for the money. those cycles i don’t spend reading, writing, and running large-scale tests are cycles i use to move more features.

the chain premise is about a preference, not a rule. i often test subsystems, or even through subsystems, but the chain premise pushes me towards the smallest and lightest subsystems that will prove the point.
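link-by-link testing might look like this in miniature. a hedged sketch, with invented names: test A against a stub of B, and B on its own terms, instead of one heavy A-through-B test.

```python
# a sketch of testing a chain A -> B link by link: test A against a
# stub of B, and test B on its own, instead of one heavy end-to-end
# test. all class names here are illustrative, not from the muse.

class Pricer:                 # "B": the next link in the chain
    def price(self, sku):
        return {"book": 10}.get(sku, 0)

class Checkout:               # "A": depends only on something with .price()
    def __init__(self, pricer):
        self.pricer = pricer

    def total(self, skus):
        return sum(self.pricer.price(s) for s in skus)

class StubPricer:             # stands in for B when testing A's link
    def price(self, sku):
        return 7

# link test for A: Checkout works if its pricer works.
assert Checkout(StubPricer()).total(["x", "y"]) == 14
# link test for B: Pricer works on its own terms.
assert Pricer().price("book") == 10
print("chain tested link by link")
```

each test is tiny, fast, and readable, and together they come very close to proving the whole chain.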

the correlation premise: the internal quality of my code correlates directly with the productivity of my work. we could say a lot about external quality (EQ) vs internal quality (IQ), but i’ll shorthand it here. EQ is things detectable by an end user. IQ is things detectable only by a programmer with the source.

the correlation premise is hard for some folks cuz they confuse those, EQ and IQ. here’s the thing, you *can* trade EQ for more features faster. you can. if my customer doesn’t care if it’s slow, i’ll finish faster. if she doesn’t care that it’s ugly, i’ll finish faster, doesn’t care that the program’s rules don’t map perfectly onto the domain’s, same.

consider the polynomial of my daily productivity. two big terms are 1) my skill, and 2) the domain’s complexity. if i hold my skill and domain complexity constant, what’s the next biggest term, the one that dominates my production function? it’s the quality of the material i start with. this is the correlation premise doing its work.

finally, the driving premise: tests and testability are first-class participants in design. when we sketch designs, we consider many 1st-class factors. the two that non-TDDers and noobs focus on: ‘will it work’ and ‘will it be fast’. but the driving premise says the third first-class question is ‘will it test’.

tests & testability considerations *shape* our code. the biggest block i’ve seen in young’uns getting to TDD is their unwillingness to change designs to make them readily testable. the driving premise says we have to. there’s a tangent i’ll offer, maybe later today, on soft-TDD vs hard-TDD, but either way, we have to.

so those five premises underlie almost every move a TDD’er makes. and this is all pretty flavor-agnostic. it cuts across schools pretty well.

we’re in this for the money. we rely on human judgment. we test chains link by link. we keep IQ high. we shape design w/testability.

my coaching and teaching friends, if you’re out there doing your thing today, please talk to folks about the premises you’re using. we absolutely have to get past this thing of creating faux-systems of rules, and in to transferring the heart of TDD.

Hard-TDD vs Soft-TDD

alrighty-then. this hard-tdd vs soft-tdd thing.

a couple of days ago, i worked through some underplayed premises of TDD, here.

along the way, i touched on what i call hard TDD vs soft TDD.

the terms derive from AI, where proponents differ on soft-AI vs hard-AI. a semantic association, not a real analogy, so i’ll skip that. hard vs soft here isn’t about technique, it’s about what we believe the value of the technique includes.

and don’t be confused, there are (at least) three positions: no-TDD, soft-TDD, and hard-TDD. i am a hard-TDD man, myself.

the terms have to do with the value of TDD. most standard discussions of TDD as a process look at the results and offer the extrinsic value produced by them.

the tests are good because, variously, they’re progress pitons, they’re living documents, they’re validators, and so on. i believe all that, of course. hard-TDD certainly incorporates all of soft-TDD and none of no-TDD. but hard-TDD makes a very bold claim.

TDD designs are *better*, even if we deleted all the tests.

take 2 codebases, one produced w/TDD, one produced w/o. do not consider the tests themselves. hard-TDD predicts the w/ is better designed. this sounds mystical, and i’m sure to hear about that from those who (naturally, healthily) resent the dogmatic behavior of some TDD folks.

but it’s not mystical at all. i haven’t worked out all the links rigorously, but i’ve made a start, and i’ll sketch that out here.

consider the SOLID principles. you don’t have to be 100% sold to see that they clearly have merit as design forces, yes? after all, in many ways they’re restatements of 40 years of work defining and describing “good design”.

take a case like ISP, the interface segregation principle. this idea is about proper ‘jointing’ of problems when designing. it pushes us two directions at once: towards smaller classes with shorter APIs, and, along w/the SRP, towards tight self-contained APIs. SRP = single responsibility principle, the idea i used to call Just One Thing[tm].

so these principles strongly suggest we give grave consideration to how we break problems up, and that we attend closely to size. and here’s the thing. that is *exactly* what the practices of TDD drive us toward as well.
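here’s a toy sketch of that ISP+SRP push. the interfaces and names are entirely my invention for illustration: instead of one fat interface, two tight self-contained ones, so a client that only reads need only know about reading.

```java
// a toy sketch of the ISP/SRP jointing described above: two small,
// self-contained interfaces instead of one fat one.
// all names here are invented for illustration.

interface LineSource {
    String read();
}

interface LineSink {
    void write(String s);
}

// a client that only reads depends only on LineSource --
// smaller class, shorter API, just one thing
class Echoer {
    private final LineSource in;
    Echoer(LineSource in) { this.in = in; }
    String echo() { return "echo: " + in.read(); }
}

public class IspSketch {
    public static void main(String[] args) {
        // a one-line lambda stands in for any real source -- exactly the
        // kind of tiny test double that small interfaces make cheap
        LineSource canned = () -> "hello";
        System.out.println(new Echoer(canned).echo()); // echo: hello
    }
}
```

notice how the microtest falls out for free: a small interface means a one-line fake, and a one-line fake means a cheap single-link test.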

i should pause here. it’s clear to me that “my” TDD isn’t everyone’s.

the noobification of everything has spawned mantras & flowcharts galore. when i say TDD, i mean the loose catalog of moves i’ve been led to. they include precepts from CI, from TDD itself, from experience owwie and otherwise. the judgment premise should remind us all of that. i don’t have or teach a flowchart for my TDD in any simple format. i have a toolbelt with a lot of tools. i use my judgment constantly in applying them.

all that having been said, the closest phrase i have to describe what i do is in fact TDD. so i will stick with it. the world needs another X-driven-Y label like it needs more holes in its wobbly little head.

back on track, then. the ISP+SRP isn’t the only force from the SOLID world that TDD leans us towards. microtesting pushes me towards the DIP — dependency inversion principle, colloquially, don’t make important things depend on details.

when i’m microtesting to make some new class, i am constantly spawning collaborations and putting those in other classes. not from a sense of design purity, but from a sense of rich, raw, unmediated, overpowering, umm, laziness.

it works like this. i know X is gonna need parts that deal with Y. i don’t roll that work into X, i push it to Y. i do this because i am after X right now, the heart of X, the center of X. i’m not after the detail of the Y-parts.

think of it this way. i need to parse these lines. that’s the central job of the CharacterParser class. and CharacterParser does not care where the source lines come from. it doesn’t matter. that is unimportant. i don’t hesitate to name the Lines class (or in this case, use the modern line-oriented stuff built into the JDK).

i give my CharacterParser a Lines object. parsing out d&d characters doesn’t depend on where that text comes from. the full solution depends on that. but the force of wanting to microtest the interesting part of CharacterParser leads me quickly to DIP.
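here’s the shape i mean, sketched in toy form. CharacterParser and Lines here are stand-ins i’ve invented for illustration (including the “Name: x” line format), not real code from any project; the point is only that the parser depends on an abstraction, and the trivial in-memory source makes microtesting it effortless.

```java
import java.util.List;

// a toy sketch of the DIP move described above: CharacterParser
// depends on a Lines abstraction, not on files, sockets, or stdin.
// all names and the "Name: x" line format are invented for illustration.

interface Lines {
    List<String> all();
}

// the important thing: the parsing logic, with no idea where text comes from
class CharacterParser {
    private final Lines lines;
    CharacterParser(Lines lines) { this.lines = lines; }

    List<String> names() {
        return lines.all().stream()
                .filter(l -> l.startsWith("Name: "))
                .map(l -> l.substring("Name: ".length()))
                .toList();
    }
}

// the detail: one trivial source, perfect for microtests
class InMemoryLines implements Lines {
    private final List<String> lines;
    InMemoryLines(String... lines) { this.lines = List.of(lines); }
    public List<String> all() { return lines; }
}

public class DipSketch {
    public static void main(String[] args) {
        Lines source = new InMemoryLines("Name: Grog", "STR: 18", "Name: Elric");
        CharacterParser parser = new CharacterParser(source);
        System.out.println(parser.names()); // [Grog, Elric]
    }
}
```

a file-backed or network-backed Lines can come later, and the interesting part of CharacterParser never has to know.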

and that’s it, for now, on my underlying reasoning for backing hard-TDD.

there are other variants on “what is a good design” than SOLID, some older, some newer, some in conflict. much has been made in recent years, partly via functional programming, of immutability & function composition, for instance.

as a TDD’ing microtester, i love immutability and i love composition. not because of the deep insights of design theory. because they are easier to microtest. immutability is nice because there’s no invisible state, my methods all return testable objects, and i never have to reach inside. function composition is nice because it makes it easy to satisfy myself that A works, B works, and + works, and thus that A+B works.
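a toy sketch of that “A works, B works, + works, so A+B works” claim, using the JDK’s own function composition. the two little functions are my invention; the composition machinery (`Function.andThen`) is standard JDK.

```java
import java.util.function.Function;

// a toy sketch of composition-friendly microtesting: test each pure
// function on its own, trust the JDK's andThen combinator, and the
// composite follows. the two functions here are invented for illustration.

public class ComposeSketch {
    // two tiny pure functions -- the "A" and "B"
    static final Function<String, String> trim = String::trim;
    static final Function<String, String> shout = s -> s.toUpperCase();

    public static void main(String[] args) {
        // A works
        System.out.println(trim.apply("  hi  "));                 // hi
        // B works
        System.out.println(shout.apply("hi"));                    // HI
        // + works (andThen is the JDK's composition), so A+B works
        Function<String, String> cleanAndShout = trim.andThen(shout);
        System.out.println(cleanAndShout.apply("  hi  "));        // HI
    }
}
```

no hidden state anywhere: every step returns a value i can assert on directly, which is exactly why the microtester in me loves this style.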

hard-TDD, my current working stance, holds that testable designs are *intrinsically* better designs, even w/o tests.

they’re better because the natural act of being a lazy TDD’er using microtests is to shape designs towards our best current grasp of “good”. that’s not a universal position. fortunately, the soft-TDD case is itself pretty compelling, and hard-TDD certainly embraces it.

thanks for tolerating. if you want to talk about any of this, just give me a ping. tip your waiters, they work hard.

The Noobification Of Everything

friend matt asked me to elaborate on “the noobification of everything” in the geek trades. this is a floppy vague one as yet, so be prepared to play fast and loose.

the so-far endless demand for new software has pulled a badly skewed skills distribution into the trade. divide ever-so-arbitrarily our ranks into 5, dreyfus-style or thereabouts. 1’s know where to put semi-colons. 5’s know as much as we all know about geekery.

we have way too many 1’s for our 5’s. the world needs us, tho, desperately. so the in-between ranks get increasingly filled with pseudos. people in those intermediate ranks are teachers & coaches. but they’re not actually much better at geekery than the 1’s they teach. in everyday parlance, they’re a half-chapter ahead. did u finish the book before the others? *abracadabra*, you’re a 4!!

this force drives the trade downward. it makes us favor rulesets over judgment. slogans over thought. it leads inevitably to the thing i’ve called “idols of the schema”. it feels like we *have* to create known-bad rulesets just to stay in it. and humans being what we are, we can’t call those known-bad sets “known-bad”. there’s control at stake, and morale & ego & money.

people, lots of people, have been taught “never” and “always” based on ideas that just aren’t that good.

it’s one thing to teach an “always” when you mean “97.9% of the time”. it’s another thing to say “always” when you mean “57.3%”. and because our pseudo-2’s and pseudo-3’s have neither time nor 4’s to help them along, there we stay.

traditionally, the 5’s are the oft-lonely explorers, out there forging new ideas for us all. they tend not to bring anyone along but the 4’s. and we wind up further and further away from having a body of knowledge & practice that actually works.

we just sell rulesets that don’t work.

i call this the noobification of everything. i call it lots of things, actually. when u hear me talk about idols of the schema, i’m talking consequences. when u hear me talk about “insufficient paths out of shu”, the same.

we are not, as a trade, improving the performance of our juniors. we’re just hiring more 1’s, over & over.

we give them a stick, point out which end the bullet comes from, and send them over the wall. we collapse into idol debates on the border between categories, flavor wars, drive-by critiques, and magazine-cover bullshit. (heh, typo “idol debates” was meant to be “idle debates”, but actually, that’s not bad.)

and that’s what i got. the noobification of everything leaves me deeply demoralized, shotgun-surgery angry, & not doing what i want. is there a way out? hell, i don’t know. i’ve heard proposals, but most of them — being pissy here — come from pseudo-2’s.

here’s what i *am* doing, in the likely vain hope that it will help.

i am resisting rulesets, flavor-debates, and hierarchical controls, everywhere i see them.

i am searching my active practice for what i am calling “plays” in a “playbook”, a catalog of moves i make. i am stopping the pretense that professional geekery is reducible to “coding”. i am raising my position in the trade so i have a more wide-reaching pulpit.

i am *not* building a new list of never’s and always’s. i am not debating corner cases. i am not going to produce a new theory of software.

i am trying really really really hard to be kind, not mean, encouraging not disparaging. as much as i can manage. and i’m forgiving me so i can forgive others. i’m searching out my mistakes and laughing with them.

i’m trying to be at the same time *more* passionate and *less* definitive.

that’s all i got. thanks.

*now* see what u did!?!