On Thursday, Facebook announced that it will use “updated machine learning” algorithms to better spot and counter misinformation on its platform. The company says its existing third-party fact-checkers will review stories that the new algorithm flags, and their reports may be shown below flagged stories in a section called Related Articles.
The Related Articles feature—a list of suggested links offering varying perspectives—is technically not new. Facebook began publicly testing it in April, but the company is now rolling it out more widely in the US, Germany, France, and the Netherlands, TechCrunch reported on Thursday. These are countries where Facebook already has fact-checking partnerships in place.
Facebook says its objective with Related Articles and the updated machine learning tech is to offer users more context on the validity of stories they see in their feeds. The company aims to help users make better judgment calls about whether to believe a potential hoax, or share it with their network.
But it’s also just another way for Facebook to continue acting like a news outlet for billions of users without directly accepting any journalistic responsibility.
“We don’t want to be and are not the arbiters of the truth,” Facebook News Feed integrity product manager Tessa Lyons told TechCrunch. “The fact checkers can give the signal of whether a story is true or false.”
But while Facebook does not want to be seen as the authority over what stories are permitted on its platform, it is. By delegating the subjective work to non-Facebook employees and leaning on machine learning technology, Facebook still gets to wield its influence as an editorial outlet without being labeled as one. And if any mistakes are made—if a politically charged story is wrongly flagged as a hoax, say, or if Facebook accidentally recommends fake news—Facebook can now more easily shift blame to a glitch or a third party.
Facebook hasn’t shared why its updated machine learning algorithm is now more capable than it once was—or whether a previous version was ever widely in use on users’ News Feeds before today. But the company seems hell-bent on trying to fix its misinformation problem (the one Mark Zuckerberg once brushed off) out in the open. The shady part is that Facebook wants to do this without being held directly responsible for how it handles a potential hoax. Facebook isn’t calling the shots; the machines are.