Twitter has struggled to rein in harassment on its platform for years, but in January, the company pledged to finally get serious about the problem. “We didn’t move fast enough last year; now we’re thinking about progress in days and hours, not weeks and months,” Ed Ho, Twitter’s general manager of consumer product and engineering, promised. Twitter has rolled out a slew of updates since then, designed to stifle abusive behaviour. But the company has been quiet about how often it takes action when accounts are reported for abuse, and reporting by BuzzFeed revealed that harassing tweets often enjoy a long and happy life on the platform.
Twitter’s twice-yearly transparency report, released today, reveals for the first time how often the company takes action on reports of abusive behaviour. Because the report focuses on government requests for information and takedowns, however, it only includes statistics for harassment reported by government officials.
Between January and June of this year, Twitter received 16,414 reports of abusive behaviour involving 6,299 accounts. The reports came from people the company describes as “individuals identifiable as government representatives.” Twitter took action against just 12 per cent of those accounts.
These abusive behaviour reports could come from a politician’s official account—like Donald Trump's @POTUS account, for instance—or from a city council member’s personal account. At Twitter, “abusive behaviour” is a catchall that includes harassment and violent threats as well as impersonation and other violations of the site’s terms of service.
Political accounts are often the target of violent threats, and a 12 per cent action rate doesn’t look great for a company that has promised to finally get harassment under control.
“Abusive behaviour-related submissions accounted for over 98% of the TOS reports we received from government representatives around the world,” Twitter says in its transparency report. “The majority of the reported content was removed for violating rules under these areas: harassment (37%), hateful conduct (35%), and impersonation (13%).”
Although abusive behaviour towards government officials goes relatively unchecked, Twitter appears to be doing a better job of eliminating terror content from its platform. The company took action on 92 per cent of the accounts that government officials reported for promoting terrorism.
Over the last year, Twitter has increased its efforts to eliminate extremist content, and the company is claiming a striking victory—it says that, of the nearly 300,000 terror accounts it removed between January and June, 75 per cent were suspended before they ever published a single tweet.
Twitter declined to say exactly how it detects an account’s association with terror content if the account isn’t tweeting, although it’s possible that Twitter looks at likes, follows, account registration metadata, and other signals.
“We are reluctant to share details of how these tools work as we do not want to provide information that could be used to try to avoid detection,” a Twitter spokesperson told Gizmodo. “We can say that these tools enable us to take signals from accounts found to be in violation of our TOS and to work to continuously strengthen and refine the combinations of signals that can accurately surface accounts that may be similar.”
Since August 2015, Twitter says it has suspended nearly one million terrorist accounts. In the six-month reporting period covered by the transparency report, Twitter says it didn’t need to rely on reports from government officials to detect and combat terror content. “Government requests accounted for less than 1% of account suspensions for the promotion of terrorism during the first half of this year. Instead, 95% of these account suspensions were the result of our internal efforts to combat this content with proprietary tools,” the company said in the report.