Shifting certainties. This is where I'm headed these days.
Without belaboring the criticism, what I'm seeing is that we have a trade with a whole stack of roles and humans to fill them, and, of necessity, those humans have assembled a varied set of certainties, sometimes compatible and sometimes not, by which they navigate.
The trouble is that, even when the certainties align with one another, they, ummm, aren't. That is, they aren't certainties at all. Neither our data nor our experience actually backs up most of them.
So for a couple of years I've been all certainty-abolishing in my tone. That has worked exactly not at all, because we need certainties, accurate or no. Living in perpetual doubt is not a common human capacity.
So now I see that I can't just abolish the certainties; I have to find replacements for them. Alternatives. I want to stop saying "let go of this" and instead say "grab hold of that." That's what I'm calling shifting certainties.
I have a list of them, partial and likely incorrect, with the "from" on the left and the "to" on the right.
Some examples of what I mean…
- From "motivate geeks" to "avoid de-motivating geeks".
- From "transfer information" to "provide experience".
- From "argue from theory" to "design experiments".
- From "endpoint-centric" to "next-non-net-negative stepping".
- From "being right" to "building rich consensus".
- From "no mistakes" to "mistakes early and often".
There are more, but that offers a sampling. It's all pretty inchoate for now. But in the last few months I've come under certain influences, and they are enabling me, maybe, to formulate a model I can explain and demonstrate, one that puts these certainty-shifts into perspective.
Thanks for letting me muse. I'll doubtless be returning to this many times. Work-in-progress, don'tcha know.