The social network has developed a new technology …

Content moderation remains Facebook's main battleground. On Wednesday, the Web giant announced that over the past quarter it had removed no fewer than 8.7 million images of child nudity or sexual exploitation of children that violated the platform's rules.

This far-reaching moderation was achieved thanks to a brand-new technology deployed by Facebook. Until now, the network used photo-matching software that recognized child abuse images already identified in the past and blocked them from being posted again. For new material, Mark Zuckerberg's company relied on user reports.
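The photo-matching step described above can be sketched, in heavily simplified form, as a hash lookup against a database of previously flagged images. The names and the blocklist here are purely illustrative; production systems such as Microsoft's PhotoDNA use perceptual hashes that survive resizing and re-encoding, not the plain cryptographic hash used in this sketch.

```python
import hashlib

# Hypothetical blocklist of hashes of previously identified abuse images.
# A cryptographic hash only catches byte-for-byte identical files;
# real matching systems use perceptual hashing to catch edited copies.
KNOWN_HASHES = {
    # SHA-256 of the empty byte string, used here as a stand-in entry.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def should_block(image_bytes: bytes) -> bool:
    """Return True if the upload matches a previously flagged image."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES
```

In this sketch, an upload whose hash appears in the blocklist is rejected before publication; anything else passes through and, as the article notes, previously depended on user reports to be caught.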

Artificial intelligence to detect child nudity

“In addition to this photo-matching technology, we use artificial intelligence to detect child nudity and child abuse content previously unknown to our services as it is uploaded,” Facebook wrote in its release. This artificial intelligence is meant to let Facebook identify such content quickly so that it can be removed and reported.

The social network said that 99% of the 8.7 million pieces of content had been removed before anyone reported them. “Our community standards prohibit the exploitation of children and, to avoid any risk of abuse, we also act on non-sexual content, such as innocuous photos of children in the bath.”

The platform also worked in collaboration with the National Center for Missing and Exploited Children (NCMEC), as well as with security experts and organizations fighting child exploitation.