Pornhub chatbot blocks millions from searching for child abuse videos

Over the past two years, millions of people searching for child abuse videos on Pornhub's UK site have been disrupted. Each of the 4.4 million times someone entered words or phrases linked to abuse, a warning message blocked the page, saying such content is illegal. In half of those cases, a chatbot also pointed people toward where they could seek help.

Pornhub deployed the warning messages and chatbot as part of a pilot program, in partnership with two UK child protection organizations, to see whether small-scale interventions could nudge people away from seeking out illegal content. A new report analyzing the trial, shared exclusively with WIRED, says the pop-ups led to a decrease in searches for child sexual abuse material (CSAM) and led many people to seek support for their behavior.

“The actual raw search numbers are actually scary high,” said Joel Scanlan, a senior lecturer at the University of Tasmania who led the evaluation of the reThink chatbot. During the multiyear trial, searches related to CSAM on Pornhub’s UK site generated 4,400,960 warnings; put another way, 99 percent of all searches during the trial did not trigger a warning. “The intervention time for searches is significantly reduced,” Scanlan said. “So the deterrence message does work.”

Millions of CSAM images and videos are discovered and removed from the web every year. They are shared on social media, traded in private chats, sold on the dark web, or, in some cases, uploaded to legal porn sites. Tech and porn companies do not allow illegal content on their platforms, although they remove it with varying degrees of effectiveness. Following a New York Times report, Pornhub removed approximately 10 million videos in 2020 in an attempt to purge child abuse material and other problematic content from its site.

A spokesperson for Pornhub’s parent company, Aylo (formerly MindGeek), said the company uses a list of 34,000 banned terms, spanning multiple languages and millions of combinations, to block searches for child abuse content. The spokesperson said this was one way Pornhub was trying to crack down on illegal content and part of the company’s efforts to keep users safe, after years of accusations that it hosted child exploitation and nonconsensual videos. When someone in the UK searches for a term on Pornhub’s list, the warning message and chatbot appear.

The chatbot was designed and created by the Internet Watch Foundation (IWF), a nonprofit that removes CSAM from the web, and the Lucy Faithfull Foundation, a charity dedicated to preventing child sexual abuse. It appeared alongside the warning message a total of 2.8 million times. The trial counted sessions on Pornhub, which means some people may have been counted multiple times, and it was not designed to identify individuals. The report said searches for CSAM on Pornhub were “significantly reduced” and that the chatbot and warning messages appeared to be at least “partially” effective.
