In its never-ending quest to regain some respect, Facebook has implemented a new system for political ads on its platform that promises greater transparency.
Entities placing a politically-themed ad for the first time will now be asked to provide proof of identity – like a passport or driving licence – and their location, which will go to a third party for verification. This information will be used to add a 'Paid for by' label to the ad, to allow people who see it in their newsfeeds to assess where it came from.
Political ads will also be added to a searchable online archive, where they'll be kept for seven years, so you can look up ads that ran or are running in the UK, US or Brazil. Those are the only three territories where the new system has been implemented so far, primarily as a result of suspected interference in the US election and the UK's EU referendum.
In an official blog post announcing the move, Facebook delineated the topics that will require the new label:
"ads in the UK that reference political figures, political parties, elections, legislation before Parliament and past referenda that are the subject of national debate."
That last one might as well just say "Brexit."
Clicking on the 'Paid for by' tag will take you to the ads archive, where you'll be able to see the spend range for the ad (not a specific figure but a rough indication), how many people have seen it so far, and what other ads are being run by the same people. You don't need a Facebook account to use the archive, which is helpful after so many people deleted their accounts this year.
If you see an ad that you think should have the 'Paid for by' label and it doesn't, you'll be able to report it to Facebook using the three dots in the corner. At which point you'll probably get an instant automated reply from a mod-bot saying it doesn't break any rules, because that's what Facebook does best.
The social network is quick to point out that this won't solve the problem of foreign interference in elections:
"While we are pleased with the progress we have seen in the countries where we have rolled out the tools, we understand that they will not prevent abuse entirely. We’re up against smart and well-funded adversaries who change their tactics as we spot abuse."
Facebook also recently announced extended policies for combating voter suppression on the platform. Ads offering to buy votes or that misrepresent voting information (like the example below) have been banned since 2016, but the new rules add more types of ads that won't be allowed to run.
Facebook explains the new policy as follows:
"We are now banning misrepresentations about how to vote, such as claims that you can vote by text message, and statements about whether a vote will be counted. (e.g. “If you voted in the primary, your vote in the general election won’t count.”)
We’ve also recently introduced a new reporting option on Facebook so that people can let us know if they see voting information that may be incorrect, and have set up dedicated reporting channels for state election authorities so that they can do the same."
That last part suggests Facebook knows it won't catch all rogue ads itself, and indeed the blog post gives an example of a claim that would have to go to third-party fact-checkers: "we’re unable to verify every claim about the conditions of polling places around the world (e.g. “Elementary School Flooded, Polling Location Closed”)". Waiting for the fact-checkers to verify could mean lots of people are falsely put off voting in the meantime, though.
What do you think of Facebook's new policies? A step forward in the fight against fake news, or a too-little-too-late attempt to salvage its reputation? Let us know in the comments. [BBC]