“Avoid Changing Code” Should Be Avoided

Of all the bad advice the geek trades inherit from over-simplified manufacturing & engineering, perhaps the worst: “avoid changing code”.

Take a minute, and think of all the things the trade does to avoid changing code. Virtually every element of “big design up front” policy is based on this view. Long stretches for analysis, longer ones for design. Planning cycles that extend years into the future. Years. I am not exaggerating for effect. Years. Contract-like specifications, database designs that start with hundreds of tables. So-called security systems that make exposing an endpoint a six-month, four-committee process. All the varied approaches we call endpointing — trying to create a complex system by drawing a straight line from here to there. Think of this huge investment. And it’s all because we are trying to avoid changing code.

Why? What’s so bad about changing code? There’s a long list of potential answers to that question. But they seem to fit conveniently into two rationales.

Rationale 1: Changing code is bad because it means doing the same work twice. (An inheritance from manufacturing faux-efficiency.)

Rationale 2: Changing code is bad because it introduces potentially wild uncertainty. (An inheritance from engineering faux-science.)

The argument from efficiency is rooted firmly in construction. If you don’t dig the basement now, it’s far harder to dig it later. One hears that exact metaphor with some frequency. And it has some application to software, but not much. There are a small number of things it’s best to implement first. These take the form of things we think of, geekfully, as “aspects” in AOP (aspect-oriented programming).

Localization is a good example. The longer we put off localization, the harder it gets to do. But most software isn’t like digging a basement. Most software isn’t aspect-y like this, touching everything everywhere. Most of the things we worry about when we endpoint our systems are things the geeks could change any time we wanted at very low cost. Now, let’s be careful here. Not all geeks. 🙂 People trained heavily in change-avoidance need to be re-trained before they can do this. Code bases mired in change-avoidance need to be re-worked before this can be done to them. The point is, if we’re not avoiding code change, code change from an efficiency point of view is basically net-neutral. (More on this later.)

What about the argument from uncertainty? The gist: “Don’t touch that, you don’t know what it’s connected to.” Whenever I hear this, I think to myself, ummm, gee. Why don’t I know what it’s connected to? The idea is that computer software is a dark mysterious place, full of indeterminacy and randomness, so changing code means radical uncertainty. This is, to put it as mildly as I can bear to, “not true”. Computers work because they do the same thing in the same situation every single time. That’s the exact opposite of uncertainty.

To be sure, computers doing the same thing in the same situation every time can still confuse humans. This happens because we think the situation is the same but the computer does not. Computers are very picky; people are loosey-goosey. But changing code doesn’t introduce uncertainty, it reveals and heightens uncertainty that was already present. Another way to say this: if you know exactly what the “before” does, you can easily guarantee that the “after” honors the parts you keep. Same provisos as before. Geeks need to learn how to do this, and codebases need to be built/altered to support it.
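A minimal sketch of what that guarantee can look like in practice, in Java. A pinning test captures exactly what the “before” does, so any “after” must keep honoring it. ShippingFee and its numbers are hypothetical, invented purely for illustration:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical legacy code we intend to change; imagine a gnarlier body.
class ShippingFee {
    static double calculate(double kg, String service) {
        return service.equals("standard") ? 10.0 + 1.25 * kg : 25.0 + 2.5 * kg;
    }
}

// A pinning (characterization) test: it records what the code does NOW,
// before we change anything. If the rewrite honors the parts we keep,
// this test keeps passing.
class ShippingFeeTest {
    @Test
    void standardParcelFeeIsUnchangedByTheRewrite() {
        // This value was captured from the "before" behavior, not a spec.
        assertEquals(12.50, ShippingFee.calculate(2.0, "standard"), 0.001);
    }
}
```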

My bottom line here: “avoiding code change” is fabulously expensive and entirely unneeded. It promotes neither efficiency nor certainty. I mentioned before that the efficiency difference between changing and not-changing code is nil. If we left it there, why choose code change? Because code change enables continuous harvesting of value, and avoiding code change disables it. Efficiency and certainty are certainly related to value harvesting; I wouldn’t argue with that. But they are not the only factors. In my forthcoming series on optimizing for complexity, I offer a sub-problem: optimizing for sustenance.

The key idea here is that orgs can sustain themselves while they develop software by frequently harvesting every bit of value they create. This comes down to continuously revealing increasing capability to users and using their feedback interactively to drive and feed our efforts. Avoiding code change cannot do this. It suppresses exactly the attitudes, techniques, and experience we need to harvest value interactively.

Umpty-nine years ago, when all these old agile guys were young and less sullen, a guy named Beck wrote a book that was foundational. The name of the book is “Extreme Programming Explained”, and it has a particularly revealing subtitle: “Embrace Change”.

Embrace change. In code, in people, in process, in market, in model. Embrace change.

Shouts out to friend @kentbeck. For all the changes since, all the wrangles and aggravations and owwies, you hit that nail square on its head.

Using Strategy & Function-Specific To Attack Large Classes

In IT work — “it puts the database on its browser skin” — sooner or later we face the problem of the gigantic object.

Some systems, especially “global monolith reboots”, just start there. Others grow and grow and insensibly slide into it over time. The problem is that there are one or a few objects that are just plain central to the domain. To forestall Ron and his Marick quote, I’ll use the classic “order” as my example here. Imagine a system that is meant to interact with orders all the way from manufacturing to stocking.

Planners add detail to orders, then assign them to production facilities. Those folks use an app to update state right up to logistics. Logistics moves the orders from vendor to vendor, sometimes as simple as parcel delivery, sometimes shipping containers, and so on. Vendors use those orders to track their inventory, monitor their part of the pipeline, and add/alter as their marketing changes shape.

In beginner’s OO, we do a thing that OLBs sometimes call “noun circling”: go through the specs, circle all the nouns, and call them objects. Noun-circling isn’t bad, per se. But it can be dangerous for the inexperienced, and as Bob Martin pointed out, inexperience is the norm. All of this is why I recommend that my intermediate OO friends invest heavily in understanding two closely related patterns.

These are the strategy pattern (from the GoF) and the function-specific pattern (from me: I made it up out of the many owwies I have lived through). Strategy was originally designed to support variant algorithms for operations on an object. If there are two ways to calculate the gross weight of an order, strategy calls for me to have two classes, one for each algorithm. These classes live inside the skin of the order. Depending on domain, the order (or a client) chooses one and assigns it as a member. Then, when another client asks for the gross weight, the order uses its own strategy field to calculate the correct response.
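Here’s a minimal sketch of that shape in Java. Every name in it (Order, GrossWeightStrategy, the two algorithm classes) is invented for illustration:

```java
import java.util.List;

// One operation, two interchangeable algorithms.
interface GrossWeightStrategy {
    double grossWeight(Order order);
}

// Algorithm 1: sum the weights of the order's line items.
class SummedItemWeight implements GrossWeightStrategy {
    public double grossWeight(Order order) {
        return order.items().stream().mapToDouble(Item::weight).sum();
    }
}

// Algorithm 2: use the rated weight of the shipping container instead.
class ContainerRatedWeight implements GrossWeightStrategy {
    public double grossWeight(Order order) {
        return order.containerRatedWeight();
    }
}

class Order {
    private GrossWeightStrategy weightStrategy = new SummedItemWeight();
    private final List<Item> items;
    private final double containerRatedWeight;

    Order(List<Item> items, double containerRatedWeight) {
        this.items = items;
        this.containerRatedWeight = containerRatedWeight;
    }

    // The order (or a client) chooses an algorithm and assigns it as a member...
    void setWeightStrategy(GrossWeightStrategy strategy) {
        this.weightStrategy = strategy;
    }

    // ...then any later caller just asks the order, which delegates.
    double grossWeight() {
        return weightStrategy.grossWeight(this);
    }

    List<Item> items() { return items; }
    double containerRatedWeight() { return containerRatedWeight; }
}

record Item(String name, double weight) {}
```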

So far, this has nothing to do with solving the problem I laid out. But in fact, the exact same mechanism can be used in a number of ways. Forget two ways to compute gross weight. Consider two dramatically different algos, like “compute gross weight” and “select best production line”. We use the same basic mechanism, placing a class as a member of order, but we use those classes for completely different purposes. Our very large order object becomes smaller each time we do this — if we decompose reasonably well.
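Continuing the hypothetical sketch above, stripped down to just the new parts: the second strategy has nothing to do with weight, but the mechanism is identical, and the order shrinks again:

```java
import java.util.Comparator;
import java.util.List;

// A completely different purpose, same mechanism: plant selection also
// moves out of the giant Order into its own small member class.
interface PlantSelector {
    Plant selectBestPlant(Order order, List<Plant> candidates);
}

// One concrete policy: choose the plant with the most free capacity.
class MostFreeCapacitySelector implements PlantSelector {
    public Plant selectBestPlant(Order order, List<Plant> candidates) {
        return candidates.stream()
                .max(Comparator.comparingDouble(Plant::freeCapacity))
                .orElseThrow();
    }
}

// Order now carries two unrelated strategies, and has shed two
// unrelated clusters of behavior.
class Order {
    private GrossWeightStrategy weightStrategy;
    private PlantSelector plantSelector;

    Plant chooseProductionPlant(List<Plant> candidates) {
        return plantSelector.selectBestPlant(this, candidates);
    }
}

interface GrossWeightStrategy { double grossWeight(Order order); }
record Plant(String name, double freeCapacity) {}
```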

The first few times one implements strategy, one meets some delicate questions. “How do I get the data to the algo object?” is a big one. “How and when do I set or change the strategy?” is another. I have no set rules for this; I’m a stepwise experimenter by nature. You just have to try it different ways until it starts to feel tight. Do that often enough, and you’ll get quite good at it. I often find myself holding just one data field in the monster object: the key I need to find anything I want about an order. Individual strategies do the rest of the work. This works well, but can eventually cost you performance. Wait ’til it does, is my advice.
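A sketch of that “just one key” shape, again with invented names. OrderRepository stands in for whatever data access you actually have; it’s hypothetical, not any particular library:

```java
import java.util.List;

// The monster shrinks toward a single field: the key.
class Order {
    private final String orderKey; // the only data the order itself holds

    Order(String orderKey) { this.orderKey = orderKey; }
    String key() { return orderKey; }
}

// Hypothetical data access: each strategy fetches what it needs by key.
interface OrderRepository {
    List<Item> itemsFor(String orderKey);
}

class SummedItemWeight implements GrossWeightStrategy {
    private final OrderRepository repo;

    SummedItemWeight(OrderRepository repo) { this.repo = repo; }

    public double grossWeight(Order order) {
        // The strategy, not the order, knows how to find its own data.
        // Each call is a lookup: cheap until it isn't.
        return repo.itemsFor(order.key()).stream()
                   .mapToDouble(Item::weight).sum();
    }
}

interface GrossWeightStrategy { double grossWeight(Order order); }
record Item(String name, double weight) {}
```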

Now what about the function-specific object? This is what I call a kind of external strategy. Here, the order doesn’t keep its own strategies internally. Any strategy can be applied to any order by a client. (This is nothing more than a complex functor, an object that represents an actual *function* rather than a noun. Nothing new under the sun.)
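A sketch of the external flavor. Now no strategy lives inside the order; the client picks the functor and applies it:

```java
// An order-functor: an object that IS a function over orders.
interface OrderFunction<R> {
    R apply(Order order, OrderRepository repo);
}

class GrossWeight implements OrderFunction<Double> {
    public Double apply(Order order, OrderRepository repo) {
        return repo.itemsFor(order.key()).stream()
                   .mapToDouble(Item::weight).sum();
    }
}

class SomeClient {
    double weigh(Order order, OrderRepository repo) {
        // The client, not the order, decides which function to apply.
        return new GrossWeight().apply(order, repo);
    }
}

// Same hypothetical shapes as the previous sketch, repeated so this
// compiles on its own.
class Order {
    private final String orderKey;
    Order(String orderKey) { this.orderKey = orderKey; }
    String key() { return orderKey; }
}
interface OrderRepository { java.util.List<Item> itemsFor(String orderKey); }
record Item(String name, double weight) {}
```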

Now push one more time. What if I make a class that a) is an order, but b) is customized for, say, plant-choosing? Its methods are tightly focused around all the things one needs to do to choose the right plant for building its order. It exposes few or even none of the underlying order’s data or its generic operations. Instead, it just does plant-selection. I have taken the key+functor idea of strategy and bundled several related strategies into an object “on top of” an order.
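Here’s what that might look like, sticking with the hypothetical plant-choosing job and the made-up repository from before:

```java
import java.util.Comparator;
import java.util.List;

// A function-specific order: it IS an order underneath (same key), but
// its entire surface is one job: choosing the right plant.
class PlantChoosingOrder {
    private final String orderKey;      // the shared underlying identity
    private final OrderRepository repo; // hypothetical data access

    PlantChoosingOrder(String orderKey, OrderRepository repo) {
        this.orderKey = orderKey;
        this.repo = repo;
    }

    // Several related strategies, bundled behind one tight interface.
    List<Plant> eligiblePlants() {
        return repo.plantsAbleToBuild(orderKey);
    }

    Plant bestPlant() {
        return eligiblePlants().stream()
                .max(Comparator.comparingDouble(Plant::freeCapacity))
                .orElseThrow();
    }

    void assignTo(Plant plant) {
        repo.assignPlant(orderKey, plant.name());
    }

    // Note what is NOT here: no line items, no pricing, no logistics,
    // no generic order operations. Just plant selection.
}

interface OrderRepository {
    List<Plant> plantsAbleToBuild(String orderKey);
    void assignPlant(String orderKey, String plantName);
}

record Plant(String name, double freeCapacity) {}
```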

This is not rocket science. Most geekery is not rocket science. But it’s not the kind of thing a noob can see without prompting. And it’s not the kind of thing an intermediate can do without practice and experimentation. So the next time you see a very large object — lots of data, lots of operations — maybe you can get some new ideas for how to wrangle it.

A key insight: working this way means I no longer have to design either my order or my order’s underlying db in one massive step. I can always add new function-specific orders. I can keep running my tests against my old ones to prevent interaction effects, and I can refactor my db right underneath all this without having to change any of the client code. The fundamental challenge of massive IT is to find ways to do it one step at a time. Strategy + function-specific are a start.
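A sketch of that safety net, assuming the hypothetical PlantChoosingOrder above plus a made-up in-memory test double. Tests pinned to the existing function-specific orders keep passing while new ones get added and the db gets refactored underneath:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// A hypothetical in-memory test double for the repository.
class InMemoryOrderRepository implements OrderRepository {
    private final Map<String, List<Plant>> plants = new HashMap<>();

    InMemoryOrderRepository withPlants(String orderKey, Plant... ps) {
        plants.put(orderKey, List.of(ps));
        return this;
    }

    public List<Plant> plantsAbleToBuild(String orderKey) {
        return plants.get(orderKey);
    }

    public void assignPlant(String orderKey, String plantName) { /* record it */ }
}

class PlantChoosingOrderTest {
    @Test
    void stillChoosesThePlantWithMostFreeCapacity() {
        OrderRepository repo = new InMemoryOrderRepository()
                .withPlants("PO-1234",
                        new Plant("Plant-A", 10.0),
                        new Plant("Plant-B", 40.0));

        PlantChoosingOrder order = new PlantChoosingOrder("PO-1234", repo);

        // Pinned behavior: refactor the db or add new function-specific
        // orders, and this must keep passing.
        assertEquals("Plant-B", order.bestPlant().name());
    }
}
```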

Autonomy: How Freedom Correlates To Urgency

Let’s talk about the A from RAMPS today: Autonomy.

Autonomy is “simple not easy,” just like the rest of the motivating forces. Autonomy is the sense of an individual that she is free to work the best way she knows how to achieve the tasks in front of her. We call self-driving cars “autonomous” to contrast them with cars that are controlled entirely by humans: mere machines. If I say an individual doesn’t feel he has autonomy, I’m saying he feels like he’s just a machine at work, controlled entirely by others.

In my experience, autonomy is always set lowest on the slider in the IT departments of large organizations (VBCAs). The classic definition of non-autonomy is that of totalitarianism: everything not expressly required is expressly forbidden. Notice that this is a game of boundaries. Organizations always have boundaries, rules drawing lines beyond which we don’t go. As we’ll see when we get to interaction effects later, boundaries are best situated carefully with respect to purpose. Almost every restriction on my working freedom removes motivation from me. BUT, if a restriction balances with the other sliders, it’s worth it, a net plus.

The most disastrous boundaries I see are usually the ones justified by “efficiency” (the scare quotes are intentional). And the second class of disastrous boundaries are those justified by what we call asteroid worries. “What if an asteroid strikes the company’s campus?” is an asteroid worry. It simply doesn’t bear any weight of concern at all, because if an asteroid strikes, the comment blocks you require before every method JUST WON’T HELP.

How can managers affect autonomy? In ways big and small, actually.

We increase autonomy by reducing the set of boundaries to the smallest set we can live with. By rigorously connecting them to purpose. In already-demotivated teams, by strongly encouraging step-wise experimentation around pushing, probing, and testing the remaining rules. The biggest known bang for the autonomy buck, the one that gets the most press, is quite large-scale, and it has to do with time. Companies reduce or eliminate fixed hours and fixed locations for their workers. Many companies have run many such experiments, from killing time clocks, to offering X days a week of “work from home”, to completely eliminating every requirement that a worker be present.

Here’s a fascinating thing: almost none of these companies have abandoned their experiments, and many have made them firm policy. I’m gonna ask a bitter question: do you think these orgs do this because they love their employees more than their stockholders? No. I’m sorry. They don’t. They do this because their employees get more work done without the rule than with it.

The range of moves towards autonomy is hardly limited to hours and location, tho. There are myriad opportunities, large and small. Almost anything that makes me more of a human and less of a machine increases my sense of autonomy. Let’s look at some small ones.

Almost all “standard process” efforts reduce autonomy. Try making every such standard fit on one page. Got a coding standard? Make it fit on one page. Got a database normalization standard? Make it fit on one page. And so on and so forth. I should be able to hand my new geek ALL of what she’s required to do as a geek on three or four pages. How laborious are your rules for checking in code? I’ve a client whose teams take half-an-hour FOR EVERY PUSH. Lose that fast.

As orgs grow, there’s huge pressure to standardize. Resist that pressure with great ferocity. The rationale is that we must have every team work the same way in order to work together. It’s simply bogus. I’m sorry, but it is. What we must have is each sub-team interacting with the sub-teams around it in a way that works for both. The idea that that means they all work the same way is a boundary too far, and a classic instance of Idols of the Schema.

I’ll drop one more note. Remember, my starting question was “How do I get my team to feel a sense of urgency?” By the time we ask that, we’re in trouble, and much of the trouble comes from over-bounding, from loss of autonomy. We’re in recovery mode.

There’s a special technique here. “Catch them doing something wrong, then bless it.” A team eschews the standard manual check-in and rolls a script that asks the user two questions and automates all the rest? Bless it. A team rolls code that is excellent and needed but not part of the standard process? Bless it. After-the-fact blessing of individuals pushing boundaries, small and large: that’s how you bring them back to a sense of autonomy.

When I meet a new team, they’re almost all wondering what kind of random crappy new system I’m going to force them to use. “What fresh restrictive hell is this?” I find the easiest-to-fix owwie that they feel, and I say, this is stupid, let’s fix this. Rinse, lather, repeat. After a few of these, they start to get it: my early big mission is to free them to take control over their work life. I don’t care about method or system. I don’t care about rules. I care about whether they are gaining control over their work life.

Sometimes, I go to the grandboss and say, “They want to try this, and they’re scared of you. Don’t notice it for two weeks, then bless it.” (I’m corrupt.) A few weeks later, the grandboss comes by and says, “I see that you’re not doing X, you’re doing Y. How is it?” They tell her, she says, “Great! You folks are rocking,” and walks out the door.

This is the granting and encouraging of autonomy. It’s resisting standards for the sake of standards. It’s resisting asteroid rules. It’s telling trusted people that we trust them to help us solve problems, without telling them how. Teams that feel in control of their work out-perform teams that don’t by orders of magnitude.

People want to succeed. People don’t want to be machines. Hold those two ideas close, and autonomy can drive your urgency-blues away.