Last Thursday, Apple (AAPL) unveiled controversial plans to scan U.S. iPhones to root out child sexual abuse material (CSAM), in addition to other equally contentious changes planned for its Messages app.

The new update is due out later this year and marks the first in a number of steps Apple plans to take to stop the flow of child pornography and, by extension, improve the company's fraught relationship with law enforcement.

Child safety groups have lauded the move. "Apple's expanded protection for children is a game-changer," said John Clark, president and CEO of the National Center for Missing and Exploited Children (NCMEC), in a statement last Thursday. "The reality is that privacy and child protection can coexist."

Privacy hawks remain unconvinced.

Changes to Messages include new parental controls, which flag sexually explicit images and alert parents if their child chooses to view one. Apple notes that the parental alerts will not be available for older teens.
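A rough sense of how that gating might work is sketched below in Swift. The type names, the explicit-content flag, and the under-13 cutoff are assumptions for illustration, not Apple's actual API.

```swift
// Illustrative only: names, the explicit-content flag, and the age cutoff are
// assumptions, not Apple's real API. The reported behavior is that flagged images
// are blurred for enrolled children, but parents are alerted only for younger ones.
struct ChildAccount {
    let age: Int
    let parentalAlertsEnabled: Bool
}

enum IncomingImageAction {
    case show                      // nothing flagged
    case blurWithWarning           // flagged, child warned, no parent alert
    case blurWarnAndNotifyParent   // flagged, child warned, parent alerted
}

func handleIncomingImage(flaggedExplicit: Bool, for child: ChildAccount) -> IncomingImageAction {
    guard flaggedExplicit else { return .show }
    // Assumed cutoff: parental notification applies only to children under 13.
    if child.parentalAlertsEnabled && child.age < 13 {
        return .blurWarnAndNotifyParent
    }
    return .blurWithWarning
}
```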

But the deeper controversy surrounds Apple's plans to scan photos synced to the cloud. Under the forthcoming system, software called NeuralHash will convert any image uploaded to iCloud into a string of numbers, called a hash.

This data, not the image itself, is then compared against an NCMEC database of hashes derived from known images of child exploitation. If a match is found, a "safety voucher" is generated. Generate enough vouchers, and a human moderator steps in, decrypting the images. If the pictures do contain CSAM, the appropriate authorities are alerted.
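In Swift, the pipeline Apple describes looks roughly like the sketch below. The real NeuralHash is a perceptual hash produced by a neural network, and matches stay hidden behind cryptography until the threshold is crossed; here a plain SHA-256 digest and a simple set lookup stand in, and the 30-voucher threshold is an assumption for illustration.

```swift
import Foundation
import CryptoKit

// Hypothetical sketch of the on-device flow, not Apple's implementation.
struct SafetyVoucher {
    let imageID: UUID
    let matchedHash: String
}

func neuralHashStandIn(for imageData: Data) -> String {
    // Stand-in: hex digest of the raw bytes, not a true perceptual hash.
    SHA256.hash(data: imageData).map { String(format: "%02x", $0) }.joined()
}

func scanBeforeUpload(_ imageData: Data,
                      knownHashes: Set<String>,
                      vouchers: inout [SafetyVoucher]) {
    let digest = neuralHashStandIn(for: imageData)
    // Only the hash is compared against the NCMEC-derived list; the image itself
    // is never inspected on-device.
    if knownHashes.contains(digest) {
        vouchers.append(SafetyVoucher(imageID: UUID(), matchedHash: digest))
    }
}

// Human review begins only after enough vouchers accumulate for one account.
let reviewThreshold = 30   // assumed value for illustration
func shouldEscalateForReview(_ vouchers: [SafetyVoucher]) -> Bool {
    vouchers.count >= reviewThreshold
}
```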

Apple notes that the new system does not work on videos. It also stresses that no scanning will take place unless photos are synced to the cloud. "If users are not using iCloud Photos, NeuralHash will not run and will not generate any vouchers," Apple privacy head Erik Neuenschwander told TechCrunch.

The company contends that the pending changes add to, not subtract from, its reputation for privacy.

Underpinning Apple's arguments is the fact that scanning will be done on the phone itself rather than in the cloud, which allows vouchers and any data to remain encrypted. Therefore, no one should be combing through photos of your newborn unless they get a kick out of strings of random numbers. Decryption only happens if a certain threshold is crossed, and Apple claims that false-positive results have only a one-in-a-trillion chance of happening. Since there aren't a trillion iPhones, the innocent can rest easy, right?
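A quick back-of-the-envelope check of that claim, in Swift; the one-billion-account figure and the per-account, per-year reading of the error rate are assumptions for illustration.

```swift
// Rough arithmetic only; both inputs are assumptions beyond the
// one-in-a-trillion figure quoted above.
let falseFlagChancePerAccountPerYear = 1e-12  // Apple's stated error rate, read per account per year
let accountsSyncingPhotos = 1e9               // assumed order of magnitude of iCloud Photos users

let expectedFalseFlagsPerYear = falseFlagChancePerAccountPerYear * accountsSyncingPhotos
print(expectedFalseFlagsPerYear)  // 0.001 — roughly one wrongly flagged account every thousand years
```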

Critics contend that because the forthcoming features run as a local process, Apple is creating a backdoor through the iPhone's encryption that anyone could exploit, from repressive governments to black-hat hackers. Ironically, this is the mirror image of Apple's central argument.

"Now Apple has demonstrated that they can build a surveillance system, for very specific purposes, that works with iMessage," Matthew Green, associate professor of computer science at Johns Hopkins University, told the Wall Street Journal. "I wonder how long they'll be able to hold out from the Chinese government?"

Apple says that the new system is incapable of mass surveillance or directly scanning a user's iPhone.

"We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future," said Apple in a statement. "Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government's request to expand it."