
Two types of engineering for resiliency

Feb 9, 2018 Tags: #risk #engineering

I spend quite a lot of time thinking about the way we think.

In particular, I think a lot about the way we make engineering choices that mitigate risks. So this is another brief note on the subject, built around a beautiful metaphor I overheard:

There are two approaches to resiliency and risk management in traditional engineering — the NASA way and the US Navy way.

I’m not entirely sure how well they reflect reality, given the story of the Apollo Guidance Computer’s design and the fact that the US Navy runs on pretty sophisticated tech as well, but they serve the point well.

When applied to modern computer engineering and security, we’re rarely thorough enough to go full-on NASA on many things, and we jump too quickly to the US Navy way for things we’ve barely learned to cope with. This is a natural reaction (wanting to cope with risks fast), but I suspect that many risks could be engineered against much more efficiently the NASA way.

This paradox has kept me intrigued for years, and now it finally has an illustrative metaphor to wrap it in language.

The NASA way makes design mistakes fatal, whereas the US Navy way converts design mistakes into a performance or expenditure penalty of some kind. The reason why computer (and specifically security) engineering intuitively shifted to the latter isn’t that obvious. The distinction between the NASA way and the Navy way lies in the nature of the risks.

Engineering against the laws of nature is a deterministic, analytical process: it is building against risks that are quantifiable, that can be formalized in a succinct scientific statement. In practice it’s an iterative process, with its own trial and error and hypothesis-test-correction loops, but each new thing you learn and systematize gives you predictive power against some risks.

If you can model most of the risks, and each failure teaches you to model better, sooner or later you will reach the surface of the moon. But can you model a civil war unfolding in a country populated by people you don’t understand? Not really.

Engineering against unknown, non-deterministic processes is much harder: the results are hardly quantifiable until it’s too late, and most tests give you only a distant reflection of reality. That’s why it seems intuitively reasonable to go the Navy way when you deal with things like building security systems.

But this has a few tradeoffs.

Trade-off 1. Most of the uncertainty in modern engineering comes from shitty engineering, not from some magical source of randomness. The more shitty engineering we flood the market with, the less deterministic the end result becomes, and the less systematic our infrastructures and defenses are.

Trade-off 2. We often forget that the US Navy still relies on plenty of advanced, reliable technology built the NASA way. The baseline for situational awareness is being able to get good data and to apply reaction tactics with predictable tools and predictable results. Relying fully on reactive tooling built ad hoc against known risks is only as effective as those risks are repetitive. In reality they are not, so you end up playing a catch-up game all the time.

In practice, designs are rarely 100% Navy or 100% NASA. But when design choices are being made, it makes sense to look at the risks before choosing a design style: what exactly are you engineering against? Can it be modelled precisely with a simple equation, or does managing the complexity require real-time resource allocation rather than pre-designing things?