How Long (Technique Remix)

how long (redux)?

in the technical side of the modern synthesis, we develop code by writing and passing tests, then re-working the code to make it as change-enabled as we can.

the key “how long” aspect to this: how long does the code stay not working? that is, how long between the times we could push the code straight to production? the desideratum in the modern synthesis is that we try to keep that number to something measured in, at most, two digits of minutes.

it’s as if we’re playing a game, and the object is to keep the application running correctly even as we are changing it. over time, as you get stronger and stronger at it, this becomes an obsession for the geek. it’s a great fun game, btw, and an old hand like me derives much pleasure from the challenges it presents. like many others, i not only like playing it for real in my codebase, i also enjoy playing in my imagination for other people’s code. 🙂

but of course, it’s not *there* just for the kicks it gives us. there are several benefits it provides for a team that seeks to ship more value faster.

  • first and most importantly, it narrows mental scope. recall that managing this, the number of conceptual balls you’re juggling in your mind, is a key part of being able to move quickly in code.
  • second, still valuable, it reinforces a deep commitment to “always shippable”. the drive for ways to change the code in small chunks without breaking the app frees us from complex source-branching strategies, and lets the value-defining teammates “turn on a dime”.
  • third, it has its own indirect effects on the code base. code that’s built this way, over repeated applications of the process, almost insensibly becomes fully change-enabled into the future.

anyway, the effects of working this way are profound, and to some extent they define the technical side of the modern synthesis. this lunch isn’t free, of course, it involves coming up with solutions to at least three different kinds of problem.

the first problem is controlling how the change is exposed to the user. large-scale changes in code don’t happen in ten minutes. that means there’ll be a time when a given new feature or changed behavior is building in the codebase, but shouldn’t be visible to non-dev users.

in a pure “add” situation, there’s a pretty basic solution: some kind of feature or rollout toggle that makes the added functionality invisible in one state and available in the other.

pure adds are relatively rare, though. most of the time, practicing evolutionary design, the code we’re changing is spread through multiple locations in the code. here things become more interesting.

two of the original design patterns come in handy pretty frequently here. 1) the Strategy pattern makes it easier for me to supply ‘pluggable’ behavior in small units. 2) the Factory pattern lets client code magically get the *right* strategy depending on toggle values.
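a minimal sketch of how those two patterns might combine with a toggle — every name here (the checkout flows, the toggles dict) is invented for illustration, not from any real codebase:

```python
# hypothetical example: a rollout toggle picks a strategy via a factory.
# integer money keeps the arithmetic exact.

class OldCheckout:
    """the shipping behavior: what users see while the toggle is off."""
    def total(self, prices):
        return sum(prices)

class NewCheckout:
    """the new behavior, building in the codebase but invisible by default."""
    def total(self, prices):
        return sum(prices) * 9 // 10  # an imagined new 10%-off rule

def checkout_for(toggles):
    """the factory: client code gets the *right* strategy for the toggle state."""
    return NewCheckout() if toggles.get("new_checkout") else OldCheckout()

# toggle off: the app behaves exactly as it always did.
assert checkout_for({}).total([10, 20]) == 30
# toggle on: the added functionality becomes available.
assert checkout_for({"new_checkout": True}).total([10, 20]) == 27
```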

a caution: the road to hell is lined on either side with “one ring to rule them all” strip malls that will make you want to only ever use one technique for controlling user exposure. we know what lies at the end of that road. 🙂

the second problem is the certainty problem. how can we be certain we have not broken the app as we make our changes?

the toggle-ish solutions from before can aid us here, *if* we have very high confidence that the toggling itself is harmless. that is, if putting the toggle mechanism in doesn’t *itself* break the app.

a preferable solution, usually faster, once you’ve set yourself up for microtesting and refactoring, is to bring the app’s business logic under a high degree of scrutiny with microtests.

microtests are exceptionally cheap & powerful in asserting “this is how it works now”. if the awkward parts of what’s being tested are isolated, we can microtest around considerable chunks of the code. they give us *tremendous* confidence that we can’t go blindly backwards.
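a sketch of what i mean, with a made-up function — the microtest just pins down “this is how it works now”:

```python
# hypothetical business logic, isolated from its awkward collaborators.
def shipping_cents(weight_kg):
    """flat 499 cents up to a kilo, 200 cents per kilo after that."""
    if weight_kg <= 1:
        return 499
    return 499 + (weight_kg - 1) * 200

# microtests: cheap, fast assertions of current behavior.
# if a change blindly breaks either case, we know in seconds.
assert shipping_cents(1) == 499
assert shipping_cents(3) == 899
```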

(note: that’s not certainty. there is no certainty. that’s confidence, though, and confidence goes a long way.)

the third problem we have to solve is sequencing. how can we chain together a bunch of steps so that a) we get there eventually, and b) the outcome of each step doesn’t break the app and c) we can still follow a broadly coherent path?

sequencing is the meat and potatoes for the refactorer, and when we change code, we very often also have to refactor what’s already there.

sequencing very often involves inventing steps that are “lateral” rather than “towards”. what i mean is, effective sequencing often involves steps that don’t immediately aim at the endpoint.

an example: it’s not at all uncommon to do things like add a class to the code that you intend to remove from the code four steps down the line.
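a hedged sketch of such a throwaway class — all names invented. the adapter below exists only so new call sites can be written before the new implementation lands; a few steps later, it gets deleted:

```python
class OldTaxRules:
    """the legacy implementation, still shipping while we migrate."""
    def tax_for(self, amount_cents):
        return amount_cents // 20  # 5%, in integer cents

class TaxRulesAdapter:
    """the lateral step: wraps the old code behind the *new* interface.
    pure scaffolding, scheduled for removal four steps down the line,
    once the new implementation exists."""
    def __init__(self, old):
        self._old = old
    def tax(self, line_item):
        return self._old.tax_for(line_item["amount_cents"])

# new-style call sites can ship today, against old-style logic.
assert TaxRulesAdapter(OldTaxRules()).tax({"amount_cents": 100}) == 5
```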

instead of twenty-five steps, all towards the endpoint, we take one side-step, four steps towards, and one more side-step at the end. fewer steps is better if they’re all the same size. (forthcoming video of an example, in my copious free time.)

learning the modern synthesis — the technical side at any rate — is very much about just learning the spirit of this approach and mastering the various techniques you need to move it from on-paper to in-practice. it’s not an instant fix. learning this takes time, energy, and great tolerance for mis-stepping. but the payoff is huge, and in some of these cases it can also be nearly immediate.

it all begins with this: “how long?”

how long will the code be not working?

pro-tip: shorter is better.

How Long?

how long?

what amount of time passes between you saving the file you just changed and you seeing the results of that change?

if that answer is over 10 seconds you might want to wonder if you can make it shorter. if that answer is over 100 seconds, please consider making a radical change in your approach.

thinking is the bottleneck, not typing, we’ve been over this a bunch. but waiting you can bypass — *all* waiting you can bypass — is waste.

i know firmware folks who routinely wait 20 minutes from save to run. why? because they are running on the target, and that means sending the image to the target, stashing it in the NVRAM, and rebooting. every time. wow, that must be some intricate h/w-dependent rocket science stuff they’re working on, eh? and sometimes the answer is yes. but i’ll let you in on a secret. not usually. usually it’s logic that’s got almost nothing to do with the peculiarities of their h/w target. i’m not joking. http and ftp servers, things like scheduling — *calendar* scheduling, not multi-task scheduling.

before we all pile on, tho, maybe we better check ourselves on this.

  • do you have to bounce a server or a service inside a container to see your results? do you have to switch to a browser? do you have to login and step through three pages to get to the right place?
  • do you have to manually or automatedly clear the cache or wipe the database? do you have to go look up stuff in a special place?
  • do you have to edit a config file, or just quick like a bunny bounce to the shell and type in those same 87 characters again (except for the three in the middle)?


and checking the results. do you do that in the browser, too? or maybe you study the log output. or, again, bounce to the shell and tweak 3 characters of the 87. of course, maybe you don’t check the results at all, one does see that from time to time. “it ran, so it worked.”

if i don’t know whether my change did what i wanted in 10 seconds i get grouchy. sometimes i suffer it for an hour, cuz the change i’m making is the one that will free me to get the faster feedback. but generally, long wait-states between change & results turn me into Bad GeePaw.

the answer, nearly every time, is to use one of several techniques to move your code-you-want-to-test from a place where that takes time to a place where that’s instant. sometimes doing this is a heavy investment the first time. most non-tdd’ed code is structured in one of several ways that make this difficult. demeter wildness, intermixing awkward collaboration with basic business logic, god objects, dead guy platonic forms, over-generalization, primitive obsession, all of these work against readily pulling your code apart for testing it.

the trick is always one step at a time. remember the conversation the other day about supplier and supplied? start there.

you’re either adding a new function or changing an old one, yeah? if you’re greenfield in your tiny local function’s context, try this: put it in a new object that only does that tiny thing. it sits by itself. declare one in a test and call it. lightning speed.
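a sketch, with an invented example: the tiny new behavior gets its own object, and a microtest declares one and calls it:

```python
class Slugifier:
    """does exactly one tiny thing, and sits by itself."""
    def slug(self, title):
        # lowercase the title and join its words with hyphens.
        return "-".join(title.lower().split())

# declare one in a test and call it. lightning speed.
assert Slugifier().slug("How Long Redux") == "how-long-redux"
```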

if you’re brownfield, you can do the same thing, but it’s harder. the key is to first extract exactly as much of the brownfield code as you need to its own method. then generally pass it arguments rather than using fields. now rework it for supplier/supplied changes. finally, once again, pull it to a new class. note: you don’t have to change the existing API to do this. that API can new the object and call it, or, if it’s of broad use, new it in the calling site’s constructor.
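here’s a hedged before/after sketch of those steps, with every name made up: imagine a render method that originally fetched rows itself and summed them inline. the summing was (1) extracted to a method, (2) given arguments instead of fields, and (3) pulled to its own class — while the existing API stayed put:

```python
class RowTotaler:
    """the extracted business logic: no fields, no collaborators,
    instantly microtestable."""
    def total(self, rows):
        return sum(r["amount"] for r in rows)

class ReportScreen:
    """the existing API, unchanged from its callers' point of view:
    it news the extracted object in its constructor and calls it."""
    def __init__(self):
        self._totaler = RowTotaler()
    def render(self, rows):
        return "total: " + str(self._totaler.total(rows))

# the logic can now be tested without the screen at all.
assert RowTotaler().total([{"amount": 3}, {"amount": 4}]) == 7
# and the old entry point still works.
assert ReportScreen().render([{"amount": 3}]) == "total: 3"
```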

the hardest problem you’ll meet: high fan-out methods or classes. these are classes that depend directly on every other package in your universe. for these, well, don’t start there. 🙂

legacy rework is not for sissies.

When I Need to Not Pair

so, a friend asked me to say more about “not pairing”. as so often, it triggered me to muse.

sometimes i *need* to not pair.

now, don’t get me wrong, i love pairing. i love it for two reasons.

  1. it makes me a better geek. that is, i learn from pairing.
  2. pairing makes two geeks more productive than if they solo’d. that is, a pair writes mo’ better code than two solos.

but there are times when i need to not pair. what are those times like?

well, first thing is, they tend to be less than an hour of keyboard time.

then there seems to be something about the circumference of me and my pair’s mutual experience. what i mean is that the problem, a coding or debugging problem, is right out at the boundary of anything either one of us has experienced.

the third thing is that it usually involves a situation where we’ve exhausted all the “dumb ideas” about how to proceed.

the fourth is that it usually involves me wanting to pursue a lengthy, i.e. >5 minute experiment. that is, i need to *drive* for more than 5 minutes in a single focused direction to come up with a next effort. that driving might be code, it might be surfing, it might be in the debugger, but by pairing standards it will be a long drive.

and finally, it never happens unless i’m also feeling personally frustrated with our progress.

so that’s my muse. sometimes i need to not pair for up to an hour. i tell my pair. we split up and rendezvous later.

@GeePawHill on Pedagogy In The Geek Trades

i find 4 major failings in both how & what we teach in geekery.

  1. we mostly don’t. that is, actual teaching of actual geekery-for-a-living is almost non-existent.
  2. we suffer in attitude & content & structure from the trivially flawed “information transfer” view of what teaching & learning is.
  3. we purport to more knowledge than we actually have, teaching crude guesses as if they were writ, and aphorisms as if they were Euclid.
  4. we withdraw from the field, abdicating our responsibility and leaving it occupied by marketeering hucksters and know-nothings.

in the interests of rigor i could now stall with an elaborate causal analysis, but i’m gonna cut to the chase on this and keep moving. it’s ultimately happening because we are a *very* young trade that has had forty years of epic levels of insane demand for our output. i see more and more that this is the root of nearly all that befalls us in the geek trades: the seemingly insatiable demand for output.

i’d call out the “agile” movement, but truth is it’s everywhere in geekery. agilists didn’t invent and likely aren’t even primary in this. so i survey this field, and i see these things that i think must be Very Bad[tm]. i wonder what i might do about it all?

so? we can’t make people not want software. in fact, we don’t even want to make people not want software.

but what *can* we do, then? or, to reframe so i am less lecture-y-to-you, what can *i* do?

what i can do is, first, choose very modest projects or steps in the face of the radical uncertainty surrounding the whole enterprise. i especially like that cuz it’s what i do as a geek anyway. it’s just that i’m doing it in an area — education — where i don’t usually.

second, i can make those projects as fundamentally open and free as i know how to make them and still feed my fat little self & the geekids. in particular, i want to not be in it for the money and i want to *prove* i’m not in it for the money. a thing that means to me that it won’t, yet, to others, is that every project should have 100% open books. (i first learned of open accounting years ago from a book called “Honest Business”. the associated memes have changed, but not the guts.)

third, i can both constrain and expand my efforts with a focus i’ve never explicitly taken before: the daily life of expert geeks. i’ve always taught “information”. i’ve taught theory and technique, i’ve formulated “rules”. i’ve generalized & idealized many topics. i’ve quested for hierarchical knowledge structures and generally worked top down. i’ve overvalued theory.

and one thing i’ve really undervalued, too. and that is the fact that i am a me, and you are a you. plato, may he burn in a perfect geometric hell for all eternity, launched this great lurching horror of mythical not-personal “knowledge”. i have come very slowly to see this over many years. an overweening fondness for beige, “impersonal”, pseudo-scientific, pseudo-objective, pseudo-knowledge seems a particular enemy.

it’s just as much an enemy as the much more easily ridiculed hucksterism, terrierdom, and corporate sloganeering that seem so visible. let me put it this way. le geek, c’est moi. & it’s not isolable from the moi that farts, gets mad, does dumb things, and giggles at himself.

these are all things i have actually known for a long time, some longer than others, and steadily increasing as a presence in my output.

i’ve had many mentors. and i want to be very clear: every single one was a goof, *openly*, a human with all the twists that entails. moments of greatness, meanness, silliness, distractedness, and — we’re talking about mentoring *me* now — remarkable patience.

but i hereby go on record. that is, i’m telling you all this so i can hear i said it and i can openly embarrass myself in trying to do it.

and *that* is who i want to be and what i want to do.

Done With “Technical Debt”

i am done with “technical debt” as a useful concept.

as so often happens, an appealing idea came along, one with useful connections to the concrete, and we took it and ran with it. unfortunately, that idea has more appeal than it does decision-making value.

folks justify the most ridiculous nonsense and put it under the heading of “technical debt”. in particular, they write shitty code, i mean *terrible* code. they do this in the name of productivity. the claim is they are accruing technical debt, and that it’s just debt, and that they’re borrowing speed from the future to pay for it now.

here are some problems with that claim.

first, it simply isn’t the case that shitty code now makes more features faster now. that is, you can not go faster by writing shitty code. so right from the get-go the metaphor breaks down.

if i borrow Y money it is to get X now. i borrow it, i get X now, and I pay back Y + Z. would i borrow Y if i didn’t *get* X now? ummmm, no. so first, you don’t get the thing you’re borrowing for. that’s one issue.

second, debt in general, as my friend @YvesHanoulle just pointed out, is one of those “i’m good at it” things. people think they know all about debt. that, after all, is the appeal of the metaphor, right?

do look around you, tho. would any reasonable person say that the world is full of people who are good at debt? the idea that “i’m good at debt” is far more widespread than the reality of people being good at it.

so, too, with technical debt. the idea that “i am good figuring out costs+benefits for technical debt projection” is far more widespread than the reality. so second, technical debt’s appeal is actually based on a false premise, that most of us understand financial debt and manage it well.

third, unlike financial debt, there are no tables, charts, or formulae to help us parse our technical debt. there are no *numbers* associated with technical debt. no numbers means no math. no math means analysis based on intuition.

and we’ve just pointed out in flaw #2 that — even *with* numbers — skill at parsing debt calculations is actually quite rare. financial debt success — bounded by math — is rare. how much more rare is technical debt success — bounded by guess and supposition? so the third flaw: the non-mathematical nature of technical debt makes the successful management of it even *less* likely.

let me enhance your terror a little, too, by pointing out that the technical debt decision makers don’t even have technical credentials. how likely is it that a middle-level manager who lives in meetings, powerpoint, and outlook, can assess the cost/benefit of shitty code?

so. for me, technical debt is just out the door. it fails on at least three points to be a compelling or useful metaphor.

i should back up a little, tho. there is *something* there, and we must consider it.

my friend @WardCunningham cut a video years back about technical debt, in which he explained he never meant for anyone to write shitty code. what it seems he was talking about when he first spoke of technical debt wasn’t the kind of awful crap we see today at all.

rather, it was about deferred design decisions. in the hardcore XP mode we work largely without a big overriding design model. this notion is reflected in the idea of working by stories. we solve *just* the problem in front of us. we do not solve ahead. we do not even specify ahead. we take our challenge one story at a time, work that story to completion, and move on. this means that we are always, permanently, deferring the idea of “the once and for all best design for our finished app”.

this idea, the “permanent revolution” if you will, lies at the heart of the modern software development synthesis.

we have been slow and awkward in bringing this notion to the fore. this is the source of many flaws in our current attempts to do so. think about the story wars, the gradual rising attack on capacity-box planning, the usage of “technical debt” to justify shite code. all of this muddle comes from our failure to grasp and explicate the permanent revolution and its related concepts.

i’ll take a breather there. thanks for letting me muse!

How I Don’t Apply XP, or Scrum, or Anything

these wrangles over system seem mis-focused. moreover, they seem part of the surface of the elephant i’ve been trying to describe. a system is inherently an abstraction. it compresses, filters, selects, features from an experienced reality. we formulate systems for at least 3 reasons.

first, so we can establish commonality. that is, we can use one system to describe a bunch of “different” local realities. we can say, “yes, that’s python and the web, that’s c and the pacemaker, but with a little abstraction they are the same.”

second, we use systems to “reason downwards”. up from local maps to system, reason about system, down from system maps to local. we use this particularly when we encounter some new local reality and we need help knowing what to do.

and third, of course, we do this because we are humans and we can’t not do it. one could no more stop abstracting local realities into systems than one could stop extracting local oxygen into lungs.

but there are risks associated with an over-emphasis on this system-making system-applying behavior.

the elephant i’m feeling about trying to describe is there, in the systematizing. i just overheard someone asking a (perfectly good) question about how one would apply Scrum in a given setting. i see people asking these questions all the time. it could be about any scheme for getting folks to make code together successfully.

i never apply scrum. or xp. or kanban. i never apply anything.

and we can readily convince ourselves that it’s just a simple misuse of language. they just *mean* “how can i help in this situation?” but at some point, one begins to suspect that the simple misuse of language represents a far deeper misunderstanding.

students of perception are familiar with this. the naive model says that we “perceive” the world, like a video recorder. and as you wade into perception, you quickly realize that, if we *do* record our experience, it’s a fairly low-fi recording. so they downgrade the fidelity of the recorder. and the more we investigate, the more we have to downgrade that fidelity.

guess what, tho. at some point we’ve downgraded the recorder’s quality to the point where it’s misleading to call it a recorder.

that’s where i’m headed with our usage of these systems.

the design patterns movement was subverted, corrupted, marketed, generally turned into a money-losing nothingness at least as fast as XP. but one thing i really liked about the intellectual underpinnings of that movement was its steadfast resistance to global systematizing.

every pattern was local. every pattern tried to describe the forces at play, to express the *multiple* equilibria that could be achieved. “Observer”, bless its little dependency-reversing heart, is not *better*. Unless, of course, you have a dependency you want to reverse.

the systems, scrum, xp, kanban, lean startup, ad infinitum ad nauseum, all seem to jerk us into betters-ville.

and they do their gunfights in bettersville. but bettersville isn’t local. it isn’t contextualized. it isn’t faithful to any reality.

when i hit the ground at a site, i don’t apply XP. i help identify heartaches. i help open paths & minds to their resolutions. i take steps. and only a complete fool would suggest that the steps i propose aren’t informed by XP et al, as well as by my personal experience, not to mention the transmitted experience i’ve surely mangled from my mentors and teachers.

but i am not putting in a system any more than a perceiver is a video-recorder.

i never apply scrum. or xp. or kanban. i never apply anything.


Musing: Refactoring Testless Code

refactoring in testless code is hard.

it’s the perfect demonstration of the manglish “agency” of code.

it is simply not possible to change testless code and guarantee you’ve done no damage. it’s one of the most delicate operations geeks do. there are principles, yes. there are tricksy techniques, too. but mostly, there is experience & judgment. the deep trick is to turn every mistake you make into a microtest that would keep it from ever happening again.

a key insight: never debug in legacy when you’re refactoring. the code that’s there *is* the code. it’s been shipping. it *is* right.

in that way, we can gradually pull ourselves out of testless legacy and into gradually increasing confidence. so. that’s all. be of good cheer. you get out of legacy the same way you got into it, one tiny step at a time.

oops. one more thought. noob TDD’ers won’t yet understand this, but the act of retrofitting microtests will reveal the correct design.

that is, a microtestable design *is* a good design, to some arbitrary epsilon. making it microtestable *is* fixing it. as you proceed with non-net-negative endpointless steps, the code will start to scream at you about the major refactorings that are needed.

Shifting Certainties

shifting certainties. this is where i’m headed these days.

without belaboring criticism, what i’m seeing is that we have a trade with a whole stack of roles and humans to fill them, and, of necessity, they have assembled a varied, sometimes compatible sometimes not, set of certainties by which they navigate.

the trouble is that, even when the certainties align with one another, they, ummm, aren’t. that is, they aren’t certainties at all. neither our data nor our experience actually back up most of them.

so for a couple of years i’ve been all certainty-abolishing in my tone. that has worked exactly not at all. because we *need* certainties, accurate or no. to live in perpetual doubt is not a common human capacity.

so now i see that it’s not that i can just abolish the certainties, i have to find replacements for them. alternatives. i want to stop saying “let go of this,” and instead say, “grab hold of that.” that’s what i’m calling shifting certainties.

i have a list of them, partial and likely incorrect, with the “from” on the left and the “to” on the right.

some examples of what i mean…

  • from “motivate geeks” to “avoid de-motivating geeks”. from “transfer information” to “provide experience”.
  • from “argue from theory” to “design experiments”. from “endpoint-centric” to “next-non-net-negative stepping”.
  • from “being right” to “building rich consensus”. from “no mistakes” to “mistakes early and often”.

there are more, but that offers a sampling. it’s all pretty inchoate for now. but in the last few months i’ve come under certain influences. and they are enabling me to — maybe — formulate a model i can explain and demonstrate that puts these certainty-shifts into perspective.

thanks for letting me muse. i’ll doubtless be returning many times to this shit. work-in-progress, don’tcha know.

The First Coaching Days

i can’t over-emphasize for new coaches the importance of rampant opportunism. until you’ve established your miracle powers in a team, you won’t be able to move big levers, only small ones. which small levers will bring you the biggest bang of trust & faith the fastest?

some possible openings: we find a bug that’s an exemplar of a *family* of bugs, and we refactor so it never can occur again. or, if they’ve started TDD’ing, we take something untestable and change it so it’s now testable. or, rally sucks & exhausts us, so we make a below-the-line/above-the-line list, keep only above in rally, and rotate a maintainer/victim role.

very geek-centric, and that’s not by accident. when i’m called in, it’s often from the top or side. i need to gain traction on the floor. you gain traction by being perceived as having already helped.

a key insight: stay mostly out of their hair in the early days. i start every gig by telling the team i realize they work for a living, and that i will be asking for very little until i find my feet and they find some confidence in me.

the “fast talk with mike” is a great technique for this. i ask for two things. 1) a loaded dev box that’s mine, 2) 20 minutes a day.

the rules for fast talk are simple: “shut up and listen and watch”. “mandatory for first 2 weeks”. and “no agenda in advance”. i spend the mornings in their code base, maybe paired, maybe not. i draw lessons, learn what tricks they’re missing, & show them in fast talk. typically, by the end of that 2nd week, i’ve had dozens of sidebars where people say, “show me how you did that.” or “take a look at this.”

it’s all tiny steps, tiny steps. when you’ve taken all the first level of tiny steps, the second level that was too-big looks & feels tiny. so then you start on those. and so on. eventually, if you are actually making life easier, they will ask you to solve *hard* problems.

and you’re on your way. 🙂

a word: sometimes your floor manager wants you to do more faster. this is an opening, too. i say, “i *am* doing more faster, you just aren’t in a position where you can tell.” i’m changing AND modeling how change works AND helping. i am often asked by these folks how they can tell it’s working. i explain, you will know if it’s working, i will show you the indicators.

the key in the short term at the beginning: actually help right now in little ways.

another opening: most teams already have a standup of sorts. most of those standups are weak and ill-loved. the kernel of the problem: people talk too much. there are several flavors of that.

1) people solve problems in standup. don’t answer any question that takes longer than 2 sentences to resolve.

2) people go around the circle of attendees instead of going around the work in progress. focus on the work.

3) managers tend to be focal points, so they turn into reports-to-managers. ask them to lay low or even not come for a week or two.

4) some players want in on every single conversation. that’s a hard one to crack. easiest: tell them to walk around after and set up times.

fixing standups is such a little thing, but it’s easy and it helps from day one, and that’s what you want, early on.

On One Ring To Rule Them All

i’m thinking of this thing called “justification privileging,” or alternatively “explanation monism”. or even, shorthand and jokily, “one ring to rule them all.”

one constantly sees tweets, blogs, even books, where someone boils down staggeringly complex and ill-understood processes to one factor. today i saw “people don’t make decisions rationally, they make them emotionally.”

now, set aside for the moment that no one even knows what those words mean other than at some vague gut-check level, even then, it’s just not so fucking simple.

why did i rush in there w/o a test? why did i first write a test? why did i *anything*?

the real, serious, answer, for anything i don’t do 100% of the time, is . . . are you ready? . . . here comes . . . “idunno.”

i can always explain away any action i can take. always. it is a human faculty. but all of those explanations are really post facto. and while some of them may have accurate elements, most of them should be highly suspect. including the ones from introspection. to say we make decisions from “emotion,” or from “reason,” or from “breakfast food,” or from any one thing, is either corrupt or naive.

humans are complex beyond any beggar’s dream. to assume as a general rule any one cause, even any primary cause, is a mug’s game.

or, and this is important, to assume it *implicitly*, *blithely*, as a matter of course. that’s what i’m really aiming at here. of necessity, assumptions are required. we *must* guess at others’ motivation. that’s not optional if we’re to act at all.

but to guess the same thing every time?

every time i don’t test it’s cuz i don’t know the value of testing? it’s cuz emotion? it’s cuz boss wants features? i doubt it.

there’s no one reason we behave. so there’s no one explanation for our behavior. privileging a single explanation over and over again is a classic noob coach fail. believing it’s always reason X leads one to always reach for remedy Y. always reaching for remedy Y has a name: “noobism”. some folks always reach for a political explanation. others for a technical one, an emotional one, a rational one.

for noob coaches, the most common one ring to rule them all is “knowledge”. if i can just get these guys to *know*, they’ll act differently. but knowledge is far far far from being the basis on which most people act from moment to moment.

to be a better coach, be a better practical tactician, a better listener, a better watcher.

christ, after all these years, i just realized i’ve become an advocate for fox vs hedgehog approaches to coaching.

it’s funny, as i always found Berlin to be very smart and very unreadable.