Meta, the parent company of Facebook, Instagram, and WhatsApp, has responded to a Friday article from the Wall Street Journal claiming that the social media giant has struggled to remove child predators and child-exploitation content from its platforms.
The Journal reported that despite the child-safety task force Meta formed in June, “Instagram algorithms still connected a web of accounts dedicated to the creation, purchase, and trading of underage sex content.”
According to the article, tests conducted by both the Journal and the Canadian Centre for Child Protection show that Meta’s recommendation system still promotes such content. Although the company has removed pedophilic hashtags, its systems still recommend similar ones, and Meta has not always removed problematic accounts or user groups when alerted to them.
Meta was quick to deny any liability for the sharing of sexually explicit material involving children and to point out that it has taken numerous steps to remove or reduce such content on its social media sites. Meta listed several of these actions in a blog post.
In a statement posted to its website, Meta said it has created a task force to review policies, examine technology, and enforce existing rules. The group would also make changes to strengthen protections for young people, ban predators, and dismantle the networks they use to communicate with one another. According to the statement, the task force immediately took steps to improve protections, and Meta’s child safety team continues to work on other measures.
In a separate press release on the topic of child exploitation, a Meta spokesperson stated: “Child abuse is a horrific crime and the online predators who do it are criminals with a purpose.” The spokesperson added that predators use many apps and websites to test the defenses of each platform and adapt quickly, and that Meta works “hard to keep ahead.”
The spokesperson continued: “We hire specialists to ensure online child safety. We develop new technology which weeds out predators and we share our findings with law enforcement and other companies. The task force we established earlier this year is actively implementing the changes.”
The Journal detailed its efforts to expose the “disturbing sexual content” involving children on various forums.
Over a five-month period, Facebook’s algorithms suggested groups such as “Beautiful Boys,” “Little Girls,” and “Young Teens Only” to Journal test accounts that viewed public Facebook groups containing disturbing discussions about children. These groups are populated by users who discuss children sexually, share links to alleged abuse content, and host private chats, often via Meta’s Messenger or WhatsApp platforms.
The Journal reported that researchers at the Stanford Internet Observatory have found “that when Meta removes an Instagram or Facebook hashtag it believes to be related to pedophilia, its system fails to detect, and sometimes even proposes, new ones with minor variants.” For example, Meta’s search recommendations suggested that anyone typing #Pxdobait add a specific emoji to the end.