Like any proper security panic, Log4j got everyone thinking about many things: open-source economics, supply chain risks, interconnected risk bearing… kk.org’s Kevin Kelly wrote a blog post that echoes a point we’re hearing a lot from senior folk:
“We have to regulate the security baseline by regulating each and every device that gets online.” Although he isn’t campaigning for any specific kind of standardisation or regulation, it’s still a strong take. And since it’s well written, it’s interesting to argue against. Read it; KK is an awesome writer, to start with.
The point that struck me is:
“In effect we would treat a security break like a public health breach. Until you can upgrade your devices to the minimum required standard of security, you are quarantined.”
… and it hits the right spot, although from the wrong direction.
I wanted to rant about COVID protocol compliance, but that’s a complicated and charged topic for 2021, so I’ll salt it away until the dust settles (2025?).
So let’s take a simpler public health example to discuss: drunk driving. We’ve got a “minimum required security standard” that implies every driver can be held accountable for their actions. Yet in the US, 28 people a day die in DUI crashes (Drunk Driving | NHTSA). And that’s intentional misbehaviour only. Think of people on the road with impaired vision, sleep deprivation and other causes of lost driver control. They all add up to “you should not be driving, but here you are”.
Talking about information security in an interconnected society, there are a few problems with mandating a security baseline for everyone:
You can’t expect people to behave reasonably against probabilistic risks.
Some are insufficiently educated about the technology they operate. Some are incapable of assessing risks properly and making sane risk decisions. Some tend to oversimplify and ignore things because life is hard enough already. And they will stay online, interconnected with saner, more responsible people.
So, if we want to put the whole idea of “regulating everyone’s security baseline” in place, we need to pull control out of people’s hands to some extent.
Now, such controls will only work against vulnerabilities and deficiencies that are known to platform controllers and described in their detection and enforcement systems. Even if these systems work, and they get to establish the new fabric of trust, 0-days instantly become more damaging.
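To make the limitation concrete, here is a minimal sketch of what such an enforcement check might look like: a gateway quarantines a device whose reported component versions fall below a known-good baseline. Everything here is an illustrative assumption (the `BASELINE` table, the version numbers, the reporting format), not a real platform API.

```python
# Hypothetical baseline-enforcement sketch. A platform controller can only
# quarantine what it knows to be vulnerable; the BASELINE dict below is an
# assumed, illustrative policy table, not a real standard.

BASELINE = {
    # Minimum acceptable versions for components the controller knows about.
    "log4j": (2, 17, 1),    # a release commonly cited as closing the Log4Shell family
    "openssl": (1, 1, 1),
}

def parse_version(v: str) -> tuple:
    """Turn '2.14.1' into (2, 14, 1) so tuples compare numerically."""
    return tuple(int(part) for part in v.split("."))

def should_quarantine(reported: dict) -> bool:
    """Quarantine if any known component is below baseline, or if the
    device reports nothing at all (its risk cannot be assessed)."""
    if not reported:
        return True
    for component, minimum in BASELINE.items():
        if component in reported and parse_version(reported[component]) < minimum:
            return True
    return False

# A device running a pre-patch log4j gets cut off...
print(should_quarantine({"log4j": "2.14.1"}))                      # True
# ...a patched one passes...
print(should_quarantine({"log4j": "2.17.1", "openssl": "1.1.1"}))  # False
# ...and a component the controller has never heard of sails through,
# which is exactly the 0-day blind spot described above.
print(should_quarantine({"some_unknown_lib": "0.0.1"}))            # False
```

Note how the last case captures the argument: the check is only as good as the baseline table, so anything outside the controllers’ knowledge passes unchallenged.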
I don’t have a good answer, only the observation that the obvious solutions don’t seem attractive. Notions of individual responsibility, real-time risk decisions and zero trust will not satisfy many souls looking for peace.
Since they will be ignored, I only hope we live to see proposals that address the inherent human incapacity for risk judgement more directly, without triggering rejection. Until then, things will look grim.