At the end of 2016, Facebook delegated its misinformation problem to outside experts, but it looks like fact-checkers tasked with weeding out bullshit are largely dissatisfied with how things are going down.
A new report released Wednesday by the Tow Center for Digital Journalism explores Facebook’s partnership with its fact-checking organizations, revealing a number of issues fact-checkers have with the social network’s existing process.
Facebook is currently outsourcing to five fact-checking partners: ABC News, the Associated Press, FactCheck.org, PolitiFact, and Snopes. From August 2017 through January of this year, Tow Center fellow Mike Ananny conducted interviews with “six senior people” who work at four of the five partner organizations, as well as two outside people with knowledge of how the fact-checking effort operates. Through his discussions, Ananny discovered that fact-checkers wanted more transparency around both the tools they use and the overall process.
“It would be great if they put out a report. We’ve been doing this for a year. They’re in the best position to know how effective it is,” one source told Ananny. “We need more transparency.”
It’s not a revelation that Facebook’s third-party fact-checkers have gripes with the partnership—they voiced skepticism about it last year. But this report offers a more comprehensive look into their discontent with a process that Facebook has touted as one of its most important lines of defense in handling the fake news problem.
According to Ananny’s research, this is what goes into Facebook’s fact-checking:
Here’s how it works: through a proprietary process that mixes algorithmic and human intervention, Facebook identifies candidate stories; these stories are then served to the five news and fact-checking partners through a partners-only dashboard that ranks stories according to popularity. Partners independently choose stories from the dashboard, do their usual fact-checking work, and append their fact-checks to the stories’ entries in the dashboards. Facebook uses these fact-checks to adjust whether and how it shows potentially false stories to its users.
Ananny found that fact-checkers had little information about, and no input into, the design of the dashboard they use to identify stories. (Facebook created the tool without soliciting feedback from the fact-checkers, according to Ananny’s reporting.) Some of Ananny’s sources expressed skepticism about what the “popularity” metric meant. “We’ve asked them a hundred ways to Sunday what popularity means,” a source told Ananny. “We don’t know the mechanism they use to determine popularity.” Another partner told Ananny that they believed the entire fact-checking system was a means of “keeping people on Facebook, to make sure they look to the platform for context.”
I also heard scepticism about Facebook’s priorities, especially that the platform was using news and fact-checking organisations as “cheap and effective PR [public relations]” and that this was another step in its experiment-driven culture.
According to one partner, there were about 2,200 to 2,300 stories listed in the dashboard at the time, but “about 75 percent of them seem to be duplicates.”
What’s more, a number of partners were skeptical about which stories appeared in the dashboard because of the types of sources they were encountering. “We don’t see mainstream media appearing [in the dashboard]—is it being filtered out?” a source said, adding that fact-checkers aren’t seeing “major conspiracy theories or conservative media” on the list, including InfoWars.
A source also characterized the dashboard as “very word-based,” and nearly every partner interviewed wanted it to include visual content, such as memes, photos, and videos.
“We should be doing work on memes,” a partner said. “The partnership doesn’t address memes, just stories. We’ve had these conversations with Facebook; it’s something they say they want to do but haven’t done it.”
The report also indicates that fact-checkers wanted more transparency from Facebook on their impact, but that they were largely kept in the dark about anything beyond the data available through the dashboard. “I don’t have much back and forth,” a partner said. “I don’t really hear from them.” According to the report, this dissatisfaction prompted “several partners” to team up to discuss their gripes with the partnership and present them to Facebook.
What’s clear from this report is that at least some of the experts tasked with leading the charge against Facebook’s fake news problem aren’t happy, and Facebook doesn’t seem to be listening. While there are certainly successes in the partnership—it’s not a total failure—it is still seemingly a weak line of defense in its current form. And if Facebook doesn’t take that seriously, it’s hard to see its commitment to fighting fake news as much more than lip service.
Asked for comment, Facebook responded with an expansive email pushing back against a number of details laid out in Ananny’s report. Specifically, the company denied that it intentionally excludes stories from its fact-checking dashboard if they have the potential to generate large amounts of ad revenue, as some sources speculated, and said it is not filtering out certain sources.
In response to sources’ claims that there is little collaboration between partners, the company added that it holds monthly phone calls for fact-checkers to engage with one another. It also noted that it pays all of its fact-checking partners. [CJR]