Investigated and account deleted after Google wrongly flagged 'sensitive' photos

Every year, tech giants flag millions of images depicting the exploitation or sexual abuse of children.

In 2021, Google alone filed more than 600,000 reports of child abuse material, disabling the accounts of more than 270,000 users.

The first tool used by the tech industry to detect child pornography was PhotoDNA, released by Microsoft in 2009.

PhotoDNA matches uploads against a large-scale database of known abusive photos. Many technology companies, such as Facebook, use it to detect infringing images even when they have been edited or resized.
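The matching step can be illustrated with a toy sketch. PhotoDNA's actual algorithm is proprietary, so the Python below substitutes a simple "average hash", a well-known perceptual-hashing technique; the hash function, the example database, and the distance threshold are all illustrative assumptions, not Microsoft's implementation. The key property is the same: near-duplicate images produce near-identical hashes, so a match can survive resizing or re-compression.

```python
# Toy illustration of hash-based image matching in the spirit of PhotoDNA.
# The real PhotoDNA algorithm is proprietary; this sketch uses a simple
# "average hash" (aHash) as a stand-in perceptual hash.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Downscale to size x size grayscale, then set one bit per pixel
    depending on whether it is brighter than the image's mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


# Hypothetical database of hashes of known infringing images
# (placeholder values only, for illustration).
known_hashes = {0b1010_1100_0110_0001}


def matches_known_image(path: str, threshold: int = 5) -> bool:
    """Flag an upload if its hash is within `threshold` bits of a known
    hash, so a match survives minor alterations such as resizing."""
    h = average_hash(path)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)
```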

The bigger breakthrough came in 2018, when Google launched its own AI-based tool that can quickly sift through vast numbers of photos and flag exploitative content involving children, even in images that have never been posted online.

Not only did it find images of known abused children, it could also identify unknown victims and refer them to the authorities for rescue. Facebook later adopted the same tool.
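Google has not published how this tool works internally, but systems of this kind typically run a classifier over each image and route it by score. The sketch below shows that triage idea only; the thresholds, action labels, and the notion of feeding in precomputed model scores are all illustrative assumptions, not Google's pipeline.

```python
# Hedged sketch of score-based triage in a classifier pipeline like the
# one the article describes. All thresholds and labels are assumptions.
from dataclasses import dataclass


@dataclass
class Verdict:
    path: str
    score: float   # model's estimated probability of abusive content
    action: str    # "allow", "human_review", or "escalate"


def triage(scores: dict[str, float],
           review_at: float = 0.80,
           escalate_at: float = 0.99) -> list[Verdict]:
    """Route each upload by its classifier score: ignore low scores, queue
    mid-range scores for human review, escalate near-certain matches."""
    verdicts = []
    for path, score in scores.items():
        if score >= escalate_at:
            action = "escalate"       # e.g. report and suspend the account
        elif score >= review_at:
            action = "human_review"   # a person double-checks the flag
        else:
            action = "allow"
        verdicts.append(Verdict(path, score, action))
    return verdicts


# Example with made-up scores: most of a family album falls well below
# the review threshold, while a false positive like Mark's medical photo
# would land in the review or escalation bucket.
print(triage({"beach.jpg": 0.02, "medical_photo.jpg": 0.97}))
```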

When images like Mark's and Cassio's are uploaded, Google's system analyzes them immediately.

According to Jon Callas, a technologist at the digital civil liberties group Electronic Frontier Foundation (EFF), this process is known as intrusive scanning.

“This is exactly the nightmare we’re all concerned about. They scan your family photo album, then you get in trouble,” Callas said.