Child sexual abuse material (CSAM) is one of the major issues that has plagued social media giants, including Twitter. Despite Elon Musk declaring that combating CSAM is a top priority for Twitter, a recent report from The New York Times suggests that such material remains prevalent on the platform and is even being promoted by Twitter’s algorithms.
The New York Times, in partnership with the Canadian Centre for Child Protection, discovered child abuse imagery across multiple Twitter accounts, with one video receiving over 120,000 views. The Centre’s own findings were even more concerning: it identified 260 of the most graphic videos in its database, which together received 174,000 likes and 63,000 retweets.
Twitter actively promoting CSAM
Despite Twitter’s claims of suspending over 404,000 accounts involved in creating, distributing, or engaging with this content, the report suggests that the platform was actually promoting some of these images through its recommendation algorithm and only took them down after being notified by the Centre. This raises serious questions about Twitter’s ability to monitor its platform and effectively remove illegal and harmful content.
Lloyd Richardson, the Canadian Centre’s technology director, said: “The volume [of CSAM] we’re able to find with a minimal amount of effort is quite significant, and it shouldn’t be the job of external people to find this sort of content sitting on their system.”
Earlier this month, Twitter announced plans to “proactively and severely limit the reach” of CSAM, remove such content, and suspend the offending accounts. However, the recent introduction of a “general amnesty” policy by CEO Elon Musk, which relaxes enforcement against rule-breaking accounts, along with the downsizing of the company’s trust and safety staff responsible for content moderation, has raised serious concerns about Twitter’s commitment to combating CSAM.