There is a robustness-efficiency tradeoff

Systems must trade off between being robust and being efficient. Understanding this tradeoff is important for dealing with inherently complex (Complexity) systems, and it makes one a better entrepreneur, investor, ecologist, etc.

To decide which to favor, consider the cost of failure and how easy it is to reverse.

If failure is cheap or easy to recover from, then optimize for efficiency. This is the logic behind all the slogans that aim to shorten OODA Loops.

“If you are not embarrassed by the first version of your product, you’ve launched too late.” – Reid Hoffman

If failure is costly and hard to recover from, then robustness is more important. Nuclear engineers do not “move fast and break things” in the name of efficiency. “Measure twice, cut once,” is the motto in this regime.

“First, never use a one-size-fits-all decision-making process. Many decisions are reversible, two-way doors. Those decisions can use a light-weight process. For those, so what if you’re wrong? I wrote about this in more detail in last year’s letter.

Second, most decisions should probably be made with somewhere around 70% of the information you wish you had. If you wait for 90%, in most cases, you’re probably being slow. Plus, either way, you need to be good at quickly recognizing and correcting bad decisions. If you’re good at course correcting, being wrong may be less costly than you think, whereas being slow is going to be expensive for sure.” – Jeff Bezos

The scientific forestry example from Seeing Like a State is an example of favoring efficiency. Monoculture plantations made timber easy to track and harvest, and production kept increasing, but it all came crashing down when disease swept through and halted the whole industry. The single-species plantations had removed the diversity that localizes infections and keeps the system alive.

Sometimes optimizing for efficiency in the short run results in a build-up of hidden risk.

“Premature optimization is the root of all evil.” – Donald Knuth

The important takeaway from this lesson: For systems where failure is catastrophic, attempts to increase efficiency in the short run often lead to less efficiency in the long run.

Is failure cheap or easy to reverse? Move fast and break things.

Is failure devastating or hard to reverse? Measure twice, cut once.
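
As a purely illustrative sketch, this heuristic can be written as a tiny decision rule. The names and thresholds here (cost_of_failure, reversibility, the 0.5 cutoffs) are assumptions for illustration, not anything prescribed by the source.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    cost_of_failure: float  # 0.0 = trivial, 1.0 = catastrophic (assumed scale)
    reversibility: float    # 0.0 = one-way door, 1.0 = fully reversible (assumed scale)


def choose_mode(d: Decision) -> str:
    """Toy version of the heuristic: cheap or reversible failure favors speed;
    costly, irreversible failure favors robustness."""
    if d.cost_of_failure < 0.5 or d.reversibility > 0.5:
        return "move fast and break things"  # optimize for efficiency
    return "measure twice, cut once"         # optimize for robustness


# An easily rolled-back product tweak vs. a nuclear plant retrofit
print(choose_mode(Decision(cost_of_failure=0.2, reversibility=0.9)))
print(choose_mode(Decision(cost_of_failure=0.95, reversibility=0.05)))
```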

Related: Amazon’s structure makes each piece relatively autonomous, so failures are localized. This allows moving fast locally while remaining robust globally.

This is Decentralized command (Four laws of combat).

A similar idea appears in federal systems, where authority is pushed down.
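
To make “fast locally, robust globally” concrete, here is a minimal sketch of failure localization in a service-oriented system. The service names and failure behavior are hypothetical, not Amazon’s actual architecture; the point is only that an isolated component can fail without taking the whole response down.

```python
import random


def fetch_recommendations() -> list[str]:
    """Hypothetical downstream service that ships changes fast and fails often."""
    if random.random() < 0.5:
        raise RuntimeError("recommendations service is down")
    return ["item-42", "item-7"]


def render_product_page(product_id: str) -> dict:
    """Degrade gracefully: a local failure is contained instead of cascading."""
    page = {"product": product_id, "recommendations": []}
    try:
        page["recommendations"] = fetch_recommendations()
    except RuntimeError:
        # Failure stays localized: serve the page without this section.
        pass
    return page


print(render_product_page("product-123"))
```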

References: https://taylorpearson.me/interestingtimes/robustness-efficiency-tradeoff/
