Twitter Is Failing To Remove Images Of Child Pornography From Platform

Twitter has failed to remove images of child sexual abuse over recent months—even though they were flagged as such, a new report will allege this week.

Researchers at the Stanford Internet Observatory say the company failed to detect and remove 40 items of child sexual abuse material (CSAM) over a two-month period between March and May this year.

The team used Twitter’s API to gather the metadata of 100,000 tweets, then scanned the associated images for CSAM using Microsoft’s PhotoDNA tool. PhotoDNA automatically hashes images and compares them against a database of known illegal images of minors maintained by the National Center for Missing & Exploited Children (NCMEC); the scan surfaced 40 matches.
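At its core, this kind of detection is a hash-and-lookup workflow: compute a fingerprint of each image and check it against a list of known-bad fingerprints. PhotoDNA itself is a proprietary, licensed service rather than a public library, so the minimal sketch below substitutes a generic open-source perceptual hash purely to illustrate the idea; the hash values, directory name and helper function are hypothetical and are not the Stanford team's actual pipeline.

```python
# Illustrative sketch only: PhotoDNA is a proprietary, licensed service, so a
# generic open-source perceptual hash stands in for it here. The hash values,
# directory name and helper below are hypothetical, not the Stanford pipeline.
from pathlib import Path

import imagehash  # pip install ImageHash Pillow
from PIL import Image

# Hypothetical set of known-bad hashes, as a hash-sharing programme such as
# NCMEC's would supply under agreement (the values below are made up).
KNOWN_HASHES = {
    "ff00aa55cc33ee11",
    "0123456789abcdef",
}


def scan_directory(media_dir: str) -> list[Path]:
    """Hash every JPEG in media_dir and return files matching the known-hash list."""
    matches = []
    for path in Path(media_dir).glob("*.jpg"):
        with Image.open(path) as img:
            digest = str(imagehash.phash(img))  # 64-bit perceptual hash, hex-encoded
        if digest in KNOWN_HASHES:
            matches.append(path)
    return matches


if __name__ == "__main__":
    for hit in scan_directory("downloaded_media"):
        print(f"match: {hit}")
```

In practice, PhotoDNA matching tolerates resizing and re-encoding by comparing hash distances rather than exact strings, and the reference hashes are distributed only under strict legal agreements; the exact-match set above is just to keep the illustration short.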

“The investigation discovered problems with Twitter’s CSAM detection mechanisms and we reported this issue to NCMEC in April, but the problem continued,” says the team.

“Having no remaining Trust and Safety contacts at Twitter, we approached a third-party intermediary to arrange a briefing. Twitter was informed of the problem, and the issue appears to have been resolved as of May 20.”

Research such as this is about to become far harder, or at any rate far more expensive, following Elon Musk’s decision to start charging $42,000 per month for Twitter’s previously free API. Indeed, the Stanford Internet Observatory has recently been forced to stop using the enterprise tier of the tool; the free version is said to provide only write-only access, and there are concerns that researchers will be forced to delete data previously collected under agreement.

The Stanford Internet Observatory has long been a thorn in Twitter’s side, having highlighted disinformation on the platform during the 2020 U.S. elections. Musk called it a “propaganda machine” at the time.

The researchers say that Twitter is far from the only offender, with more results from the research due to be revealed later this week in the Wall Street Journal.

“Twitter is by no means the only platform dealing with CSAM, nor are they the primary focus of our upcoming report,” they say. “Regardless, we’re glad to have contributed to improving child safety on Twitter, and thank them for their help in remediating this issue.”

In January, Twitter Safety claimed that it was “moving faster than ever” to remove CSAM. It said that it had suspended around 404,000 accounts that month for creating or engaging with CSAM, up by 112 per cent.

However, since then, several reports have indicated that CSAM is still rife on the platform. In February, the New York Times reported that after Elon Musk’s takeover, Twitter had been taking twice as long to remove CSAM flagged by child safety organizations.

The company still responds to all press enquiries with a poop emoji.