
Problems with CSAM Detection

Posted: Wed Apr 23, 2025 4:05 am
by bitheerani42135
Potential criticism of Apple’s actions falls into two categories: questioning the company’s approach and examining vulnerabilities in the protocol. There’s currently little concrete evidence that Apple made a technical error (a problem we’ll discuss in more detail below), though there’s been no shortage of general complaints.

For example, the Electronic Frontier Foundation has outlined these issues in detail. According to the EFF, by adding image scanning on the user’s side, Apple is essentially building a “backdoor” into users’ devices. The EFF has been criticizing the concept since 2019.

Why is this bad? Well, consider having a device on which data is fully encrypted (as Apple claims) that then starts reporting that content to third parties. Right now the target is child pornography, leading to a common refrain: “If you’re not doing anything wrong, you have nothing to worry about.” But as long as this mechanism exists, we have no guarantee it won’t be applied to other content.

Ultimately, this critique is more political than technological. The problem lies in the absence of a social contract that balances security and privacy. All of us—from bureaucrats, device manufacturers, and software developers to human rights activists and everyday users—are trying to define that balance right now.

Law enforcement agencies complain that pervasive encryption makes it harder to gather evidence and catch criminals, and that’s understandable. Concerns about mass digital surveillance are also obvious. Opinions about where that balance should lie, including opinions on Apple’s policies and actions, are everywhere.

Potential Issues with Implementing CSAM Detection
Once we get past the ethical concerns, we hit some bumpy technological roads. Any new program code introduces new vulnerabilities. Never mind governments: what if a cybercriminal took advantage of a vulnerability in CSAM Detection? When it comes to data encryption, the concern is natural and valid: if you weaken the protection of information, even with the best of intentions, anyone can exploit that weakness for other purposes.

An independent audit of the CSAM Detection code has just begun and may take a long time. However, we have already learned a few things.

First, the code that makes it possible to compare photos against a “template” has been present in iOS (and macOS) since version 14.3. It’s entirely possible that this code will be part of CSAM Detection. Utilities for experimenting with this image-matching algorithm have already found collisions. For example, according to Apple’s NeuralHash algorithm, the two images below have the same hash:

[Image: two visually different photos that, according to Apple’s NeuralHash algorithm, are a match]
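
To make the idea of a collision concrete, here is a minimal Python sketch of a much simpler perceptual hash (an “average hash” over an 8×8 grayscale grid). It is not NeuralHash, and the pixel data is made up; it only shows how two images with completely different pixel values can still produce an identical hash.

# Toy perceptual "average hash" -- NOT Apple's NeuralHash, just an illustration.
# The pixel data below is made up; only NumPy is required.

import numpy as np

def average_hash(pixels):
    # One bit per pixel: set if the pixel is brighter than the image's mean.
    bits = (pixels > pixels.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, size=(8, 8)).astype(float)

# A brightness/contrast-shifted copy: every pixel value differs from img_a,
# but the brighter-than-average pattern is unchanged, so the hashes collide.
img_b = img_a * 0.5 + 40.0

print(hex(average_hash(img_a)))
print(hex(average_hash(img_b)))
print(average_hash(img_a) == average_hash(img_b))  # True: different pixels, same hash

Crafting collisions against NeuralHash is far harder than this toy case, but the published examples show it is feasible, which is exactly what the testing utilities mentioned above demonstrate.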

If it were possible to scrape the database of illegal photo hashes, it would be possible to create “innocent” images that trigger an alert, meaning Apple could receive enough false alerts to make CSAM Detection untenable. This is likely why Apple split the detection, with part of the algorithm working only on the server side.
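
Apple’s published description boils down to an on-device matching stage plus a server-side threshold and re-check. The sketch below is a toy model of that split, not Apple’s code; the THRESHOLD value and the second_check function are assumptions made purely for illustration.

# Toy model of threshold-based, two-stage matching (not Apple's implementation).
# The THRESHOLD value and second_check() are hypothetical.

THRESHOLD = 30  # number of on-device matches required before any review

def device_scan(photo_hashes, known_hashes):
    # On-device stage: report which photo hashes match the known set.
    return [h for h in photo_hashes if h in known_hashes]

def server_review(matches, second_check):
    # Server stage: do nothing below the threshold, then re-verify each match
    # with an independent check before anything is escalated.
    if len(matches) < THRESHOLD:
        return "no action"
    confirmed = [h for h in matches if second_check(h)]
    return "escalate for review" if confirmed else "discard (likely collision spam)"

known = {0xAB, 0xCD}
crafted_uploads = [0xAB] * 50   # 50 crafted collision images, all "matching" 0xAB
matches = device_scan(crafted_uploads, known)
print(server_review(matches, second_check=lambda h: False))
# -> discard (likely collision spam): the crafted images pass the first hash
#    but fail the independent server-side check.

In this toy model, a flood of crafted collisions gets past the on-device hash but is filtered out by the server-side stage, which is the rationale the split suggests.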

There’s also this analysis of Apple’s Private Set Intersection protocol. The complaint is essentially that even before the alert threshold is reached, the PSI system transfers some information to Apple’s servers. The article describes a scenario in which law enforcement agencies request that data from Apple, and suggests that even false alerts could lead to a police visit.
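
The worry is easier to see with a deliberately crude model. In the sketch below the server cannot read any payload until the threshold is crossed, yet it still accumulates per-account metadata about uploads. The Voucher structure and its fields are invented for illustration and do not reflect how Apple’s actual PSI construction hides match information.

# Deliberately simplified illustration of the below-threshold concern.
# The Voucher type and its fields are invented; Apple's real protocol
# is far more involved and is designed to hide per-upload match signals.

from dataclasses import dataclass

THRESHOLD = 30

@dataclass
class Voucher:
    payload: bytes    # stand-in for the encrypted image derivative
    matched: bool     # stand-in for whatever per-upload signal the analysis says can leak

def server_view(vouchers):
    # What the server could tally even while no payload is readable.
    matches = sum(v.matched for v in vouchers)
    return {
        "uploads_seen": len(vouchers),
        "matches_observed": matches,
        "payloads_readable": matches >= THRESHOLD,  # still False below the threshold
    }

uploads = [Voucher(payload=b"...", matched=(i % 10 == 0)) for i in range(100)]
print(server_view(uploads))
# Even with every payload unreadable, per-account metadata has accumulated --
# the kind of information the article worries could later be requested.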

For now, these are just the early findings of an external review of CSAM Detection. Its success will depend largely on the famously secretive company providing transparency into how CSAM Detection works, and in particular into its source code.