jm.dev
j@jm.dev · @jmq_en
linkedin.com/in/jqq · github.com/jmqd
Tokyo, Japan 🇯🇵
Everything is a trade-off

To not bury the lede: Everything is a trade-off, so making a decision is about understanding which parameters you want to optimize, and their relative weights. This decision process can be easy, but sometimes it is exceptionally hard. Sometimes, a disagreement on the correct solution is actually a subtle disagreement on the relative weights allocated to each parameter. Framing these decisions as optimization problems can be a useful method for driving alignment or achieving consensus. At the very least, it can drive a mutual understanding of why the parties disagree.

For fun, let's model making a decision as some pseudo-code:

/// An outcome is something you decide to do, and it has Consequences.
struct Outcome {
    name: String,
    consequences: Vec<Consequence>,
}

/// For our simple model, a Consequence can only be one of three things.
enum Consequence {
    /// How long until we get the outcome -- i.e. wall-clock time.
    TimeToComplete { magnitude: i64 },

    /// How much "effort" will it be, in engineer-hours (or person-hours)?
    Effort { magnitude: i64 },

    /// An Outcome will have certain Effects; for example, if you decide to
    /// mow your lawn, an Effect might be Effect("House looks better", 30).
    /// For simplicity, we're doing this in a loosey-goosey stringly-typed
    /// way, but it gets the idea across. The i64 is the magnitude of the
    /// Effect. Each Effect should abide by a contract about how to interpret
    /// its magnitude.
    Effect { name: String, magnitude: i64 },
}

impl Consequence {
    /// The magnitude of this Consequence, whichever variant it is.
    fn magnitude(&self) -> i64 {
        match self {
            Consequence::TimeToComplete { magnitude }
            | Consequence::Effort { magnitude }
            | Consequence::Effect { magnitude, .. } => *magnitude,
        }
    }
}

/// A function that decides between Outcomes, given some weights.
fn decide(outcomes: Vec<Outcome>, weights: &HashMap<Consequence, f64>) -> Option<(Outcome, f64)> {
    // The decision's Outcome, and how well it scored. Higher is better.
    let mut decision: Option<(Outcome, f64)> = None;

    // Find the best outcome by summing the magnitude of each of its
    // consequences, weighted by the weights parameter.
    for outcome in outcomes {
        let score: f64 = outcome
            .consequences
            .iter()
            .map(|c| c.magnitude() as f64 * weights[c])
            .sum();

        // If the outcome we're currently evaluating looks better than any of
        // the previous ones, then it becomes our de facto candidate decision.
        if decision.as_ref().map_or(true, |(_, best)| *best < score) {
            decision = Some((outcome, score));
        }
    }

    decision
}
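If you'd like to actually run the model, here is a compilable sketch of the same idea, simplified so that consequences are plain (kind, magnitude) pairs and weights are keyed by kind rather than by the Consequence itself. The outcome names, consequence kinds, and numbers are invented purely for illustration:

```rust
use std::collections::HashMap;

/// A simplified Outcome: each consequence is a (kind, magnitude) pair.
#[derive(Debug, Clone)]
struct Outcome {
    name: String,
    consequences: Vec<(String, i64)>,
}

/// Pick the outcome with the highest weighted score.
fn decide(outcomes: Vec<Outcome>, weights: &HashMap<String, f64>) -> Option<(Outcome, f64)> {
    let mut decision: Option<(Outcome, f64)> = None;
    for outcome in outcomes {
        let score: f64 = outcome
            .consequences
            .iter()
            .map(|(kind, magnitude)| *magnitude as f64 * weights.get(kind).copied().unwrap_or(0.0))
            .sum();
        if decision.as_ref().map_or(true, |(_, best)| *best < score) {
            decision = Some((outcome, score));
        }
    }
    decision
}

fn main() {
    // Two ways to fix a bug. Magnitudes are negative where the consequence
    // is a cost.
    let outcomes = vec![
        Outcome {
            name: "quick hack".into(),
            consequences: vec![("effort".into(), -2), ("maintainability".into(), -8)],
        },
        Outcome {
            name: "proper refactor".into(),
            consequences: vec![("effort".into(), -10), ("maintainability".into(), 5)],
        },
    ];

    // A team that weights maintainability three times as heavily as effort.
    let weights = HashMap::from([
        ("effort".to_string(), 1.0),
        ("maintainability".to_string(), 3.0),
    ]);

    let (winner, score) = decide(outcomes, &weights).unwrap();
    println!("{} wins with score {}", winner.name, score);
}
```

Notice that changing the weights alone can flip the decision: a team optimizing for shipping this week would weight effort higher and pick the hack. That's the whole point — the disagreement is in the weights, not the arithmetic.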

Let me tell a story of an engineer's journey, with three chapters: Junior, Mid-level, and Senior.

Junior

A junior may have some preferences or ideas, but until you've seen how something fails, you don't have well-founded justified beliefs on how to best do things. Even more problematically, you might not be good at enumerating the possible outcomes.

There is a difference between keeping a beginner's mind and being inexperienced. A beginner's mind can bring salient fresh perspectives on a problem. Inexperienced juniors may have a beginner's mind, but not expertise. If you couple expertise with a beginner's mind, that's where the real gains come into play.

Mid-level

A mid-level engineer has started to see how things can fail, and begins to develop strong opinions on how things ought to be. These engineers should be reasonably decent at enumerating the possible outcomes, but they may inevitably miss some important possibilities. A mid-level engineer develops heuristics like Gang of Four patterns are best practices, or everything should be immutable, or good code means no duplicated logic.

The thinking patterns often held by mid-level engineers work in 90+% of cases. They're good enough, for most things. But they break down. Best practice is an oxymoron. A practice is only the best if it's always the best, and nothing is best for every case. Absolutes like "everything should be X" are useful as an approximate default case, but they don't hold true in all cases. Nuance is everywhere, and second- or third-order consequences matter.

A good example of second- and third-order consequential thinking presents itself in the question of code deduplication. Many of us grew up learning how terrible duplicated code is. But did anyone ever teach us how bad deduplication can be? Deduplication is normally achieved by introducing abstraction. This is where the question of Consequence.magnitude comes into play. Ceteris paribus, code duplication is to be avoided. But all things are not equal: the cost of a wrong abstraction is in fact orders of magnitude worse than duplicated code.

So we can model the trade-off as follows: do we allow some duplication, or do we risk the wrong abstraction? Generally, you should allow duplication until that duplication becomes an actual problem. By that time, you should have sufficient experience in the problem space that the likelihood of choosing a bad abstraction is sufficiently low. Some present this idea as the rule of three. I propose going even further, the rule of pain: duplicate until it hurts, then refactor. The reason is simple: duplication pain grows approximately linearly, but pain from the wrong abstraction is unbounded and grows in super-linear ways.
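To make that concrete in the spirit of the weighted-score model above, here is a hypothetical back-of-the-envelope comparison. All of the numbers — the upfront costs, the probability of picking the wrong abstraction, and the cost when it's wrong — are invented assumptions, not measurements:

```rust
/// Expected cost of a choice: what it costs up front, plus the chance it
/// goes wrong times the cost when it does.
fn expected_cost(upfront: f64, risk_of_wrong: f64, cost_if_wrong: f64) -> f64 {
    upfront + risk_of_wrong * cost_if_wrong
}

fn main() {
    // Keeping the duplication: a small, roughly linear maintenance tax.
    let keep_duplication = expected_cost(2.0, 0.0, 0.0);

    // Abstracting immediately: cheap up front, but a real chance the
    // abstraction is wrong, and wrong abstractions are very expensive.
    let abstract_now = expected_cost(1.0, 0.5, 40.0);

    // Abstracting after the duplication hurts (the rule of pain): same
    // upfront cost, but experience has shrunk the risk of a wrong shape.
    let abstract_later = expected_cost(1.0, 0.1, 40.0);

    println!("{keep_duplication} vs {abstract_now} vs {abstract_later}");
}
```

Under these assumed numbers, abstracting immediately is the worst of the three options, even though its upfront cost is the lowest — the risk term dominates, which is exactly what the rule of pain is hedging against.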

Senior

A senior understands that everything is a trade-off. Even things that seem obviously correct are in fact a trade-off — it's just that your weights are so skewed that it doesn't seem like a trade-off to you.

Things a senior might think:

Things a senior might think about: