
On keeping secrets in comfort

Oct 24, 2016 Tags: #security #risk

As someone who used to spend a lot of time analyzing how security incidents happen, I frequently think about how little, unfortunately, technical peculiarities have to do with most security breaches.

Making people keep secrets is very hard. Making groups follow guidelines is laborious. Making secrecy policies work in groups is even harder.
You can fail only once.

To prevent OPSEC failures, we construct hi-tech crutches for human problems, frequently without trying to understand the nature of the problem. Enforcing good behavior technically does work, but a lack of understanding of the true nature of the underlying failures leaves a lot of blank spaces in our models.

This post is an attempt to approach these problems from a somewhat unusual angle: human behavior.

Secrecy failure: a simple scenario

Let’s conduct a small mental experiment on how the evolution (degradation) of secrecy happens within a small group:

Step 1. Involuntary traitor

Setup

We’ve set up our communication channels, users have exchanged keys correctly, and right now everyone’s fine.

Based on a (rather paranoid) policy, to communicate with his secretive peers, a user has to type the keychain password on the e-mail client’s start and the key password when encrypting/decrypting a message.
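
As a sketch of what this policy costs in interaction terms, here is a toy model of the prompts involved. Everything in it is hypothetical: the hardcoded passphrases are stand-ins, and no real keychain or crypto library is assumed.

    import getpass

    # Toy model of the policy above: one keychain prompt per client
    # session, plus a key prompt on every encrypt/decrypt operation.
    # The hardcoded passphrases are stand-ins for real secret storage.
    KEYCHAIN_PASSPHRASE = "open sesame"
    KEY_PASSPHRASE = "hunter2"

    def start_client() -> bool:
        """Asked once, when the e-mail client starts."""
        return getpass.getpass("Keychain password: ") == KEYCHAIN_PASSPHRASE

    def use_key(action: str) -> bool:
        """Asked on every single encrypt or decrypt."""
        return getpass.getpass(f"Key password ({action}): ") == KEY_PASSPHRASE

    if __name__ == "__main__":
        if not start_client():
            raise SystemExit("keychain locked, no mail today")
        for action in ("encrypt", "decrypt"):
            if not use_key(action):
                raise SystemExit(f"cannot {action} the message")
        print("two extra prompts later, one message exchanged")

Every message costs two extra prompts. That friction is exactly what the user will later be tempted to route around.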

Add comfort

To operate efficiently, the user requires a convenient e-mail app across all platforms. More devices mean a bigger attack surface. The risk grows, but not drastically. The user is still compliant with the policy.

Add some real-world circumstances

Now, User X goes on holiday, takes pictures of nice landscapes with his cellphone, and his wife asks him to send her the pictures. Over e-mail, to her regular Gmail account.

To succeed, user X, who is a secret group member, either configures his secure e-mail setup to also handle regular, unencrypted mail, or falls back to a separate, plain e-mail client or account.

Either way, it is basically the same outcome: user X gets introduced to a workflow where everything is simpler; you don’t have to input the keychain password, and you don’t have to limit yourself to counterparts within secrecy groups.

Or, in a much simpler and more realistic case, User X forgets to tick the ‘encrypt’ checkbox, and User Y accepts the message and continues the communication.

This is where the fun starts.

Step 2. Induction

Now, let’s expand our mental experiment and see how this phenomenon spreads through the group.

It’s typical for groups to slightly lower their security standards when some minor part of the group lowers them. If it’s a gradual degradation rather than full neglect, it can go unnoticed.

Lately, while reading N.N. Taleb’s “The Most Intolerant Wins”, which partially inspired this post, I found some of the ideas really familiar: once a sufficient number of people reject secrecy, the policy goes bust and, at best, becomes a deceptive formality.

It happens because when User X, with his low discipline, sooner or later fails to comply with secrecy and gives it up completely, User Y, who has no such discipline problems of his own, will still have to give up the discipline to talk to User X.

Not only is such misconduct contagious (it sets a bad example for others); at some point even disciplined members will have to either give up secrecy to contact the non-disciplined ones, or excommunicate them.

What happens is just a more subtle, less visible version of the ‘what the hell effect’, where a conscious decision to give up the security regime never happens: people just convert one by one, via an inverse version of the network effect.
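
The dynamic is easy to reproduce in a toy simulation. This is a sketch under invented assumptions (group size, contact pattern, give-up probability are all mine); the only point is the qualitative, one-way cascade.

    import random

    # One involuntary traitor in a group of 20; on each contact between
    # a compliant and a non-compliant member, the compliant side gives
    # up the regime with some probability. Compliance never recovers.
    random.seed(7)
    N = 20
    GIVE_UP = 0.3                              # invented probability
    compliant = [True] * N
    compliant[0] = False                       # User X

    for step in range(1, 201):
        a, b = random.sample(range(N), 2)      # a random pairwise contact
        if compliant[a] != compliant[b] and random.random() < GIVE_UP:
            compliant[a] = compliant[b] = False    # the plaintext side wins
        if step % 50 == 0:
            print(f"after {step} contacts: {sum(compliant)}/{N} compliant")

Note that compliance only ever goes down: there is no contact that converts a non-compliant member back.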

What introduced the problem? I believe it’s the availability of the option in the first place. Whenever the choice is available, sooner or later that choice will be made in the least conscious fashion.
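
In code terms, this is the difference between an interface where insecurity is a parameter and one where it is not representable at all. A hypothetical sketch (the function names are mine, not any real client’s API):

    from dataclasses import dataclass

    @dataclass
    class Message:
        recipient: str
        body: str

    # Architecture 1: encryption is a checkbox. Both paths exist, and
    # on a tired day the default path wins.
    def send_with_checkbox(msg: Message, encrypt: bool = False) -> str:
        if encrypt:
            return f"ciphertext -> {msg.recipient}"
        return f"PLAINTEXT -> {msg.recipient}"    # one forgotten tick away

    # Architecture 2: the choice is gone. There is nothing to forget,
    # because plaintext simply cannot be expressed through this API.
    def send(msg: Message) -> str:
        return f"ciphertext -> {msg.recipient}"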

Step N. Failure

One way or another, the group will break secrecy badly enough to get uncovered by adversaries. In the case of this man with a fine moustache, it ain’t no good.

Behind the scenes

Fixing human behavior with more flexible or friendlier technology does not help, security-wise. Enforcing consistent behavior is nice when you’re a drug kingpin and everyone is afraid of you.

What can we do about it? Understand the problems first.

Flexibility problem

Doesn’t it feel like having both encrypted and regular e-mail is more flexible? It is, in absolutely cold, rational terms.

But a tired brain knows no choice; it jumps to the default, lazy behavior.

Usability “problem”

Security industry mythology suggests that security impairs usability and ease of use, especially when human actions are required to keep the security protocol consistent.

I don’t think there’s anything wrong with having lesser “usability” in some cases. The cognitive strain of a non-silky-smooth interaction induces slower, more attentive thinking patterns and is generally good for maintaining consistent behavior.

Still, there’s a level of anti-usability that is an impediment to doing things well, and it gets security controls in regular applications circumvented. So that’s…

… why we need dedicated tools

Dedicated secure messengers and file-sharing tools are a simple answer to the problem: they do one thing, and the insecure option simply isn’t there to choose.

They do deliver their guarantees, at the cost of some usability (which, as I suggested previously, is not that bad an effect as long as it doesn’t ruin the experience totally) and yet another app sitting on your phone / desktop.

Managing user behavior

Security and secrecy have more to do with human behavior than with engineering. There will always be vulnerabilities, stronger and weaker communication protocols, and so on. Yet, somehow, a significant portion of security incidents do not require the adversary to possess unique knowledge: just a basic understanding of human beings and a bunch of outdated CVEs.

Addressing these risks relies on understanding that human behavior is far from rational. When we look at a group, simple errors of human decision-making amplify into disastrous effects. Based on my experience, no amount of education and awareness prevents these effects, but clever choice architecture and constant behavior monitoring do.
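
As a sketch of what “constant behavior monitoring” could mean in practice, here is a hypothetical check over a made-up mail log: flag any plaintext message travelling between members of the secrecy group. The roster and log format are invented for illustration.

    # The group roster and the log format below are invented.
    GROUP = {"user_x@example.org", "user_y@example.org"}

    def violates_policy(sender: str, recipient: str, encrypted: bool) -> bool:
        """Plaintext mail inside the group is exactly the Step 1 failure."""
        return sender in GROUP and recipient in GROUP and not encrypted

    mail_log = [
        ("user_x@example.org", "user_y@example.org", True),
        ("user_x@example.org", "wife@gmail.example", False),  # outside scope
        ("user_x@example.org", "user_y@example.org", False),  # the slip
    ]

    for sender, recipient, encrypted in mail_log:
        if violates_policy(sender, recipient, encrypted):
            print(f"ALERT: plaintext inside the group: {sender} -> {recipient}")

Catching the first slip, before User Y accepts it and the induction of Step 2 begins, is the whole value of the monitoring.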

If you want just one TL;DR, take this:

Users of your system might be smart, trained, and motivated. But at some point in time they will be tired, their conscious decision-making resources depleted, and they will fail the secrecy regime. Unless you design your security system to be strong in the hands of lazy thinkers and distracted minds.