
Less obvious parts of security asymmetries

Feb 22, 2021 Tags: #security #musings #asymmetry-series

If you’ve been around the infosec industry for a while, you might be familiar with an old adage:

the defender has to get most (if not all) things right to meet their goals, while the attacker only has to get a few things right to win.

This is unfair, but the closer you look, the more asymmetries unfold: for example, what exactly does “get things right” mean? It’s an interesting mental exercise, so let’s do it together.

When we discuss “setting things up properly” and “getting things right” in a security context, we lack a proper “definition of done” for large-scale security systems and their subcomponents (security controls), mostly because our decisions are bets against unknown unknowns. We do our best to turn them into known unknowns by addressing two extremes of the spectrum and filling the space in between: from external risk to internal posture.

Efforts meet somewhere in the middle, and if you’re doing it right, you already have a proper understanding of the business risks related to security, have made a set of risk management decisions, and are now staring at long lists of “do this and do that” coming from compliance requirements and industry best practices, mapped onto your unique architecture and set of constraints.

Unfortunately, this does not lead to a definite answer to “did I get things right?”. Security failures that follow humongous security budgets hint that getting things right is either very hard or outright impossible.

Vague definition of done

I think the idea that “getting things right is very hard” has something to do with our ability to reason about the “utility” of risk-preventing constructions: cryptographic protocols, firewalls, source code analysers, whatever else.

“Utility” can be presented as a three-part system: directly measurable, indirectly measurable, and inversely measurable utility.

Disclaimer: this is a simplified way to think about it, but it is simplified for a reason.

Let’s walk through this hierarchy of reasoning devices we use to think about “utility” in different contexts:

The intuitive “utility” is straightforward: we have a problem, we seek a solution, and we can intuitively measure whether the problem was solved. Intuitive utility is very handy when you’ve invented the first car in a land where cars have never been used before. It drives, you say, vroom it goes, and everybody claps (but some shrivel in fear). It quickly gets messy, though: implicit in the car’s general utility are things that also matter, like driving safety or the ability to start the car in the mountains without internet coverage.

To avoid biasing you towards additional naive heuristics, I’ll leave those cars (an exciting invention) aside and take a boring, un-intuitive illustration instead: a bare-bones, naive loan management system for a tiny bank.

1. Directly measurable utility

For some technological solutions it is easy to reason about their utility.

Suppose you come up with software that does one thing: it “stops old honest jobless ex-criminals from getting bank loans”. The software blinks a red light in the loan inspector’s office every time a bank loan application matches that combination of attributes.
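
A minimal sketch of what such a stop-factor check might look like, assuming the criteria mirror the product description; the field names, the age threshold, and the dataclass shape are invented for illustration:

```python
from dataclasses import dataclass


@dataclass
class LoanApplication:
    age: int                    # applicant's age in years (hypothetical field)
    employed: bool              # current employment status
    has_past_conviction: bool   # flag from a background check


def raise_stop_factor(app: LoanApplication) -> bool:
    """Blink the red light when every configured criterion matches.

    The utility of this rule is directly measurable: feed it an
    application and check whether the verdict matches the criteria.
    """
    return app.age >= 60 and not app.employed and app.has_past_conviction


# The claim is trivially testable, and a single counterexample falsifies it.
assert raise_stop_factor(LoanApplication(age=67, employed=False, has_past_conviction=True))
assert not raise_stop_factor(LoanApplication(age=30, employed=True, has_past_conviction=False))
```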

Very good, we’ve designed a simple stop-factor system. We can instantly falsify its utility if it’s not good, and we can prove that it is good, because all of the conditions boil down to matching application fields against the chosen criteria. After operating our stop-factor application analysis software solution for some time, we realise there are a bunch of problems.

Most consumer-facing inventions like cars, touch-screen mobile phones and levitating hoverboards have intuitively understandable (and provable/falsifiable) utility. Problems start when we approach more complex systems with the same heuristics.

2. Indirectly measurable utility

Now your banking software strives to make a quantum leap, and so does its utility.

You promise to deliver software that:

“allows a qualified risk manager to build a system that stops most individual fraudsters and unwanted customers, while covering most edge cases and making it look fair”,

which is a sum of smaller claims.

Sounds easy, eh?

You can emulate all the factors and verification procedures with a certain degree of accuracy as you develop your solution.

But to actually prove that your solution does what it claims, customers have to use it for some time. The customer has to collect data on decision efficiency (the number of people who default or delay payments), and that is already hazy enough: the system depends on the “qualified risk manager on the customer’s side” entity. Now that “is it good enough” is co-owned and relies on many factors, testing the happy path of an out-of-the-box solution and illustrating it with 20 scenarios doesn’t prove anything.
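
To make the indirection visible, here is a hedged sketch: the vendor ships the decision logic, but the only number that says anything about utility, how the approved loans actually performed, appears months later, after the customer’s risk manager has operated the system and collected outcomes. The names and structures below are hypothetical:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Outcome:
    approved: bool    # what the system (plus the risk manager) decided
    defaulted: bool   # what actually happened, known only months later


def observed_default_rate(outcomes: List[Outcome]) -> float:
    """Share of approved loans that later defaulted or delayed payments.

    This is the closest thing to a utility measurement, and it is indirect:
    it depends on the customer's risk manager, the local portfolio, and
    enough elapsed time for outcomes to materialise.
    """
    approved = [o for o in outcomes if o.approved]
    if not approved:
        return 0.0
    return sum(o.defaulted for o in approved) / len(approved)
```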

Many systems fall into this type of utility, and even some security controls fall into this area. But most security controls don’t, because their utility is defined by negative, inverse scenarios.

3. Inversely measurable utility

Now imagine your system makes the next quantum leap, or a few, in its utility claims:

Your system “stops most individual fraudsters and unwanted customers with an efficient ML system, cheaper than a human-assisted system, and with less insider risk”.

How would you falsify it? You can try to do it in several ways.

Whichever ways you try, in a year someone will still find a way to trick your system, and all you will see is half of your portfolio suddenly defaulting.

Thus, you don’t really get to reason about utility before it actually fails.
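
A sketch of what that means in practice: the only thing you can count is fraud that has already slipped through and been confirmed, and a count of zero is merely an absence of known failures, not proof of utility. The structures below are invented for illustration:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ApprovedLoan:
    flagged_as_fraud: bool           # what the ML system predicted at approval time
    confirmed_fraud: Optional[bool]  # known only after the loss is booked; None = still unknown


def missed_fraud_so_far(portfolio: List[ApprovedLoan]) -> int:
    """Count frauds the system failed to stop, among cases already resolved.

    Note what this cannot tell you: how many frauds are still sitting
    unresolved in the portfolio, or whether a new trick will work tomorrow.
    """
    return sum(
        1 for loan in portfolio
        if loan.confirmed_fraud is True and not loan.flagged_as_fraud
    )
```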

That is why your sales pitch for the software turns into “this product stops most individual fraudsters we can think of and unwanted customers, with an efficient ML system, cheaper than a human-assisted system in most cases, and effective against most risks we currently understand and can model”. That’s a poor sales pitch for The Ultimate Loan Fraud Detection system, right?

So, most of the time, you try to mask this kind of utility as one of the first two types, with clever wording and careful management of expectations, in order to push new technology forward or to solve a problem in a novel way.

Fine, but the claims get less and less falsifiable, responsibility lines get blurred, and the end result is “we’ve done some work, but we’re not sure if it’s good against a skilled adversary”.

The problem with reasoning

Unfortunately, many security systems fall into the described pattern: we have thousands of years of hands-on practice, but very limited theory and scientific approach so far. When new ciphers, new firewall designs, or access control systems are built, we can reject “bad patterns definitely known to fail”, but that’s it.

Falsification is hard, as we saw. But the rest is even harder: try mapping an argument from authority onto the reasoning above without falling into an equivalent fallacy. Decomposing claims into statements that can be argued back to proven principles is hard enough already, but producing a tight, hard-to-vary decomposition is even harder, and we are yet to see how it could account for unknown unknowns.

We’re not good at it, but we’re very good at spitting out intuitive heuristics and being content with them.

We’re facing a universe of unknown unknown risks, and all we’re sure of is that under certain conditions our system will not fail. Most likely. Provided the basic assumptions are not refuted and a whole new class of computers doesn’t make trapdoor functions easier to invert than we previously thought.

This is exactly the case with the security of general-purpose post-quantum homomorphic cryptography protecting your database and letting your AI search it on a blockchain. In a zero-trust environment, of course.

When we try to reason about “did we get this control right”, we typically ascend the ladder of “utility” representations suggested above: building a very detailed understanding of conditions and outcomes without being able to validate them in intuitively easy ways, and knowing that any utility we define is relative.

Because there will be unknown unknowns in a world full of threats, and there will be hard-to-falsify utility attributes of your security measures.

Being a security practitioner rather than a scientist, I dream of progress in formal models and formal verification, no matter how funny they look right now. Without them, we’re doomed to function in a world of “too much maybe”. And that is maybe too ambiguous to justify security investments without hand-waving and FUD.

But, maybe, I’m wrong and there is an easy way to falsify this train of thought as well?