With all the gloom-and-doom scenarios about CSAM misuse already pointed out, and the outrage about the change in value proposition heating up, I am missing one interesting angle in the Apple CSAM discussion:
Implementing any security control or technology is an exercise in security budget allocation. What is the expected change in security posture and risk resistance, and how much will it cost? Are there more important risks we could address with the same budget? In other words, is this the best thing you can do right now among the known things to do?
Apple has put its money, which comes from customers buying its products and services, into technology that puts all customers at more risk. This technology might protect some vulnerable customer groups, but that is yet to be proven.
With the little I know, CSAM as a technology:
- does not address root cause (child abuse)
- does not enable detecting incidents in motion (acts of child abuse)
- would have questionable impact even if it did, because once CSAM scanning has enough publicity, criminals will avoid CSAM-enriched devices and their CSAM-penetrable photo stores, so the technology is self-defeating (“evasion reaction”).
CSAM enables one narrow use-case: matching stored images against The Blacklist, which contains evidence of previous illicit activities as collected, classified and then delivered to Apple for matching. This use-case is useful for talking to law enforcement and demonstrating compliance, but is there any real impact? The idea that potential abusers will first consume known abusive imagery and then graduate to something worse seems about as sound to me as the “marijuana is a gateway drug” theory and the “jail all drug users” theory. Neither turned out to be true, nor did either stop drug crime, as far as I know.
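The narrow use-case above can be sketched as a toy, hypothetical illustration. This is emphatically not Apple's actual protocol: the real system uses a perceptual hash (NeuralHash) so that visually similar images match, plus cryptographic machinery (private set intersection, threshold secret sharing) so neither side learns more than the match count. Here a plain cryptographic hash stands in, and every name and the threshold value are invented for illustration:

```python
import hashlib

def image_hash(image_bytes: bytes) -> str:
    """Toy stand-in for a perceptual hash. A real perceptual hash would
    match visually similar images; SHA-256 matches only identical bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

# "The Blacklist": hashes of known illicit material, collected and classified
# elsewhere and delivered to the device. The device holds only hashes,
# never the source images.
known_hashes = {
    image_hash(b"known-bad-image-1"),
    image_hash(b"known-bad-image-2"),
}

def count_matches(photo_store: list[bytes], blocklist: set[str]) -> int:
    """Answer only 'this user has N pieces of this evidence' --
    no photo content is revealed, only a match count."""
    return sum(1 for photo in photo_store if image_hash(photo) in blocklist)

REPORT_THRESHOLD = 2  # hypothetical: only escalate above some match count

photos = [b"vacation.jpg", b"known-bad-image-1", b"cat.jpg"]
matches = count_matches(photos, known_hashes)
if matches >= REPORT_THRESHOLD:
    print(f"report: {matches} matches")
else:
    print("below threshold, nothing reported")
```

The point of the structure is the one the post makes: law enforcement brings the evidence (the hash list) first, and the system answers only with a count, not with customer data.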
So why would Apple invest a lot of money into increasing the attack surface on its users to protect (with unknown efficiency) some small fraction of its audience?
- Social signalling: this is straight agenda-signalling - “we are taking baby steps to do the right thing, even though it hurts one of the core value propositions of our business”. There are commercial reasons for that, there are social reasons for that; child protection as a security-weakening plot device has been around for a while. If this is the case, we are going to see more crazy ideas like this, and we are going to see the inevitable decline of Apple as an innovator in consumer technology, because the innovation is now pointed at causes outside of any meaningful feedback loop.
- Proactive compliance operation: Apple believes it will be bent into “put backdoors in all end-to-end encrypted data exchange and let law enforcement into every Apple privacy feature”, and Apple needs to set a precedent with law enforcement of “being compliant without directly exposing customer data”. Making law enforcement bring in the incriminating evidence first, and making Apple answer only “this user has N pieces of this evidence”, sounds like the least hurtful outcome for everyone. If Apple anticipates being bent into compliance, this is a good evasion technique that will protect privacy in the long term: everyone else will be bent into direct compliance (law enforcement directly accessing customer data), and Apple will be king with its “we work on the basis of law enforcement bringing in incriminating evidence first”.
- They are mad: many leading companies have gone mad in the past, so why wouldn’t the most valuable company in the world today go mad and free up space for someone else?
- I am mad: I don’t understand the mechanics well enough, and CSAM will enable law enforcement to collect proof of abuse acts that have just happened or are happening right now. That makes a bit more sense, and I’d be delighted to know more, but remember the “evasion reaction” concern?
My hopes are, of course, for version 2, because that would be a smart move for the good of Apple’s customers in the long run. However, CSAM is still a problem from the very different angle I started with:
CSAM is a huge body of work that costs thousands of hours of human labor.
At the same time, Apple has had visible problems with regular platform/application security work, for long enough, and with an (unproven, but well-suspected) track record of actual deaths as a result. Does Apple deal with the causes of the platform weaknesses that are within its control? We don’t know, but the evidence of failure is there.
Does Apple invest security personnel time and money into increasing the attack surface on every Apple user with CSAM and the “Find My” Bluetooth network? Definitely yes.
It is not really fair to compare the two activities directly (they require different personnel to operate and are not mutually exclusive), but:
- the attack surface on every Apple user has been increased
- the NSO Group is still active
- it is not unlikely that spyware operations will cause more deaths, given the target lists
- we are yet to see the first arrests and the impact of the CSAM technique
- this is not the first time lately that Apple has built security-related technologies that increase attack surface instead of working on known problems: remember the “Find My” mesh issues?
We live in very interesting times.