Apple recently announced plans to begin scanning users' devices and iCloud accounts for images of child abuse. On the face of it, it's a plan no-one could reasonably object to. So why are many privacy campaigners and mainstream companies, including Facebook, uneasy about the proposal?
Shortly after the initial news broke, various commentators, including the head of WhatsApp, raised concerns about the plan's implications. Apple then had to release a number of follow-up statements explaining how it intends to limit the scope of the technology it plans to deploy.
What's changing?
There are two separate features being added. The first will apply to everyone using Apple devices and iCloud. When a user goes to upload an image to iCloud, it will be compared against a database of known images of child sexual abuse. That list is stored on the user's device. It contains not the images themselves, but what is known as a "hash" of each one - a short text representation of a file's contents that cannot be converted back into an image. If a user tries to upload files that match, a manual review will be conducted, and the user will be reported to the relevant authorities if that review confirms what the automated systems suspect.
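To make the comparison step concrete, here is a minimal sketch of that kind of hash matching. It is illustrative only: the names are made up, and it uses an ordinary cryptographic SHA-256 digest, which only matches byte-for-byte identical files. Apple's actual system uses a perceptual hash ("NeuralHash") designed to match an image even after resizing or re-encoding, plus cryptographic techniques that keep both the list and any matches hidden from the device itself.

```swift
import CryptoKit
import Foundation

// Hypothetical list of hashes of known abuse images, supplied by child
// protection agencies. Only hashes are stored on the device, never images.
let knownImageHashes: Set<String> = [
    "3f2acd...",  // placeholder entries
    "9b71d2...",
]

// Compute a hash (text representation) of a file's contents.
func hash(of fileURL: URL) throws -> String {
    let data = try Data(contentsOf: fileURL)
    return SHA256.hash(data: data)
        .map { String(format: "%02x", $0) }
        .joined()
}

// Before an iCloud upload, check the file's hash against the known list.
// A match would trigger the manual-review step described above.
func shouldFlagForReview(_ fileURL: URL) throws -> Bool {
    try knownImageHashes.contains(hash(of: fileURL))
}
```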
The second, separate protection being added is intended specifically for child accounts on Apple devices. It will blur any images (sent or received) that appear to contain sexually explicit material, show the child a warning and, optionally, notify their parents if the child chooses to view the image anyway. Note that the detection here is not the same as in the first case. It is based on AI analysis of the contents of the image rather than comparison against known images. No information is shared with any third parties - the process is entirely automated. It is also entirely separate from the process of reporting to the authorities outlined above. The information about what has been sent, received or detected never leaves the device.
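A rough sketch of how such an on-device check might be structured is below. The type, scoring function and threshold are all assumptions made for illustration, not Apple's implementation; in practice this would be driven by an on-device machine learning model, and nothing about the result would be transmitted anywhere.

```swift
import Foundation

// Hypothetical on-device screen for the Messages feature.
// The names and the threshold are assumptions, not Apple's implementation.
struct ExplicitImageScreen {
    // Assumed confidence cut-off above which an image is blurred.
    let threshold: Double = 0.9

    // Stand-in for an on-device ML model returning a score from 0 to 1.
    func explicitScore(for image: Data) -> Double {
        // ... model inference would happen here ...
        return 0.0
    }

    // Decide whether to blur the image and warn the child. Nothing is sent
    // to Apple or any third party; a parental notification is only
    // triggered later, if the child chooses to view the image anyway.
    func shouldBlur(_ image: Data) -> Bool {
        explicitScore(for: image) >= threshold
    }
}
```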
What's wrong with that?
Well, with the plan as outlined, absolutely nothing. Assuming it all works as intended, it will be a commendable step towards using modern technologies to protect children. However...
As with any new technology, it's always a good idea to consider what it is capable of in its entirety, not just the initial application, however correct that application may be (and it is without question here). Attention has turned to the first proposal above. In the proposed system, the database of images that will cause an account to be flagged if matched will be provided by various child protection agencies. However, once the technology is in place, there is no technical reason it couldn't be expanded to other categories, as long as the content in question matches some known existing content it can be compared against. Should uploads also be scanned for files with known terrorist content? Extreme violence? What about material that is critical of the government of a particular country, or contains references to activities banned in some repressive regimes?
This is not dissimilar to the ongoing desire of various governments around the world to break the encryption on popular messaging services like WhatsApp so that messages can be read by them. Again, child abuse and terrorism are given as the reasons this should be done, with assurances that the power would only ever be used in that limited capacity. Maybe it would be, although it's certainly possible to imagine some governments around the world that might abuse this power right now. And what about future governments? You may trust the people in power with your data now, but it's not impossible that this will change. As soon as a backdoor into your messages is added, there is no putting the genie back in the bottle.
So, in the case of this content-comparison feature, the fear is that once the system is in place, governments around the world will apply pressure to extend the database of content that alerts the authorities in various new directions, depending on the country. Apple's response to this so far is, essentially, "Well, we'll just say no". However, Apple has bowed to government pressure before. It has removed many apps from its App Store in China, and has removed FaceTime from devices in countries that don't allow encrypted calls. What if one of these countries threatens to ban the sale of iPhones unless Apple allows it to add content of its choosing to this "banned" list?
As no information is shared with third parties, the AI-enabled message scanning is less of an immediate privacy concern. However, it isn't beyond the realms of possibility that governments could request "tweaks" to the AI to censor other types of content they don't agree with. They may not know who is trying to share that content, but they could make it more difficult to share. There is also a technical aside here: this feature depends on the accuracy of the AI. Images are analysed in real time to determine whether they contain sexually explicit material, and a high rate of false positives (or false negatives) may yet cause issues for Apple.
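As a purely illustrative back-of-envelope calculation (the volumes and error rates below are assumptions, not Apple's figures), even a seemingly small false-positive rate adds up quickly at Apple's scale:

```swift
// All numbers are assumed for illustration; none come from Apple.
let imagesScannedPerDay = 50_000_000.0  // assumed images handled by child accounts
let falsePositiveRate   = 0.001         // assumed: 0.1% of innocent images misflagged

let wronglyBlurredPerDay = imagesScannedPerDay * falsePositiveRate
print("~\(Int(wronglyBlurredPerDay)) innocent images blurred per day")
// Prints: ~50000 innocent images blurred per day
```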
What's the answer?
As ever, it's very complicated, and we certainly can't come down firmly on one side or the other. It is impossible to argue against what Apple is trying to do here in the limited form it is proposing - that much isn't complicated at all. If there were a way to absolutely guarantee that the technology could only ever be used for this purpose, there would be no issue at all. However, experience suggests that one should be wary when the only protection against unwanted extensions of a new technology's reach is a single company saying "just trust us", when that company is ultimately answerable to no-one but itself and its shareholders. Political and/or market forces could provide powerful motivation to backtrack in the future. Is the undeniable gain in the first case worth the risk of this same technology being abused by governments in the future? It seems we will find out in the coming months and years.