Long after it became painfully clear that Facebook was being manipulated as a tool to undermine democracy and cause harm, the social media company still struggles to keep its platform free of the kind of garbage engineered specifically to warp reality, even as recently as this month. And the systems it has previously employed to combat pervasive disinformation on its platform just didn't seem to be cutting it.
The company on Thursday announced a new two-pronged approach to tackling this problem on its site: one new policy is intended to curb repeated abuse, and the other is aimed at establishing greater transparency. On the latter front, the company said it will share with Page managers the specific kinds of content that violate its Community Standards. Basically, if you run or help run a Page, Facebook will, starting Thursday, flag most of the content it removes or dings on a new Page Quality tab:
To start, we’re including content removed for policies like hate speech, graphic violence, harassment and bullying, and regulated goods, nudity or sexual activity, and support or praise of people and events that are not allowed to be on Facebook. While this tab provides greater insight into content that was removed or demoted, it is not a comprehensive accounting of all policy violations. For example, we won’t be showing content removals at this time for things like spam, clickbait, or IP violations.
The company also said that it will work to prevent users who maintain multiple Pages from simply pivoting an existing Page to serve the same purpose as one Facebook has deleted. So, say a user maintains three Pages and one is deleted for violating the site's hate speech policy: Facebook says it may now delete the associated Pages or Groups even if they haven't expressly violated its standards. Facebook said it will use a "broad set of information" to determine whether to take this action, but we've reached out to the company for more information.
Facebook’s new Page Quality tab.
Last month, the New York Times published a detailed report on Facebook's so-called "rulebooks," intended to help guide its 15,000 global content reviewers on what should and shouldn't be allowed on the platform. Some of these documents, which were first reported by Motherboard, were confusing and contradictory and created problems for those responsible for scrubbing Facebook of content that violated its standards. Among the myriad problems presented by this system, the most glaring seemed to be that Facebook simply couldn't keep up with the moderation necessitated by the sheer volume of content on its platform.
These new policies should, in theory, help the company tackle at least some of the misinformation, fake news, and other problematic content that surfaces on its platform. The policy update also comes as Facebook-owned encrypted chat service WhatsApp cracks down on its own misinformation problem; both services have come under pressure to better manage content-sharing on their platforms following viral disinformation campaigns used to incite violence and undermine elections.
Are Facebook’s new policies a step in the right direction? Sure. Will they completely rid the site of its evils? No, and even Mark Zuckerberg has stated that it’s impossible to ever fully purge bad actors from the site. But given that it was having to remove coordinated Russian disinformation campaigns as recently as last week, this is an absolutely necessary step in getting its product under control. [Facebook]
Featured image: Thibault Camus (AP)