Facebook Reaches £42 Million Settlement With Its Hard-Hit Moderators

By Shoshana Wodinsky

Finally, some good news for Facebook’s beleaguered moderator community. According to preliminary settlement records first uncovered by The Verge earlier today, the tech giant has agreed to pay a total of $52 million (£42 million) to its current and former content moderators, compensating them for the massive mental strain that comes with reviewing some of the most traumatising content regularly posted to the platform.

The settlement is the culmination of a legal battle between the platform and former moderator Selena Scola, who sued Facebook in the autumn of 2018 on allegations that she’d developed acute PTSD after being forced to regularly review horrific scenes of rape, suicide, and murder during her nine months on the job. Per Scola, the company failed to protect her and other workers across its vast network of contractors from the harms inherent in the work they do.

Just about two years later, it looks like Facebook’s begrudgingly agreeing that some sort of compensation is in order. The resulting settlement covers 11,250 current and former content moderators, promising each of them a minimum of $1,000, plus additional payments (up to $1,500) in the event that they’re diagnosed with post-traumatic stress disorder or a related condition. Should they be diagnosed with multiple conditions, they’re entitled to up to $6,000 in compensation.

As the Verge report details, lawyers involved with the case believe that up to half of the moderators covered will qualify for at least the $1,500 payment, and it’s not hard to imagine why. Over the past year, the company has been slammed by multiple reports detailing how these workers are paid just above minimum wage to comb through traumatic content, day in and day out.

And for that work, the company offers them fairly little recognition – in a report released earlier today detailing its enforcement of community standards, Facebook made multiple notes about its souped-up technology that “proactively finds violating content,” but made no mention at all of the human beings who are harmed when that tech invariably falls short.

Facebook did not immediately return a request for comment; we’ll update this post when we receive a reply.

Featured image: Getty