Facebook responds to Oversight Board's recommendations in Trump case

Posted on 10 June 2021

Beyond the headline decision that Donald Trump will be banned from Facebook for two years from the date of his initial suspension, Facebook has unveiled a new protocol whereby public figures may be banned for up to two years during times of civil unrest and ongoing violence. In Mr Trump's case, his actions constituted a "severe violation" of those new rules, meriting the maximum suspension. From now on, public figures deemed to be in breach of the protocol will face anything from a one-month restriction on creating content for a new violation, to a two-year suspension, or – in the most extreme cases – a permanent ban.

Importantly, Facebook has made clear it will look outwards for reasons to justify any decision: to "experts" to assess whether the risk to public safety has receded, and to "external factors" including instances of violence, restrictions on peaceful assembly and other markers of global or civil unrest. At the same time, it suggests that public figures (and other users?) will be assessed not just in relation to their content on Facebook, but in relation to their wider behaviour.

Whilst the decision on Mr Trump has grabbed all of the headlines (perhaps no accident), the most interesting aspect of Facebook's response lies in what they have – and haven't – said in reply to the Board's 19 recommendations. Of particular significance are the responses relating to the moderation process, pressure from governments, and Facebook's role in the Capitol riots.

Newsworthiness

Facebook has provided further details on its "newsworthiness" allowance, whereby they allow content to remain even if it breaches their rules "if it’s newsworthy and if keeping it visible is in the public interest". They stated that they no longer consider posts by politicians to be inherently in the public interest – a significant change of policy – and that they now "simply apply our newsworthiness balancing test in the same way to all content" (seemingly a response to the Board's strong recommendation that all users be treated equally – although how this squares with the new rules for restricting the accounts of public figures is unclear).

Cross-checking

Facebook has shed some further light on its cross-check policy – a policy first revealed in the Board's Trump decision – explaining: "We employ an additional review, called our cross check system, to help confirm we are applying our policies correctly for content that will likely be seen by many people…"

Critically, they do not explain what the process is, who decides what constitutes "content that will likely be seen by many people", or who carries out the further review. For example, it is unclear whether such content can be escalated to those making commercial and/or political decisions.

Notwithstanding this policy, Facebook claims that all users are treated the same. Yet we learned from the Trump decision that 20 pieces of content posted by Mr Trump were originally marked as violating Facebook's rules but, after the cross-check process, were ultimately determined not to be in violation – which may suggest otherwise.

The only Board recommendation that Facebook refused to implement at all was that they "should report on the relative error rates and thematic consistency of determinations made through the cross check process compared with ordinary enforcement procedures", claiming that to do so is not feasible as "we do not have systems in place to make this comparison". Given Facebook's technological capabilities, it is hard to believe that they could not put such systems in place should they wish to. If all users are treated the same, then why would Facebook resist providing this data?

Strikes

Facebook has now set out in its Transparency Center when "strikes" will be applied, and what the consequences will be. This is progress, although the company has left itself a lot of wriggle room as to when it will take action (though given the amount and variety of content it reviews, perhaps this is necessary). It also raises the question of why it has taken so long, and the intervention of the Board, for Facebook to make this seemingly elementary information available.

Pressure from governments

One of the thorniest issues that Facebook has to deal with is what it should do when governments seek to become involved in the content moderation process. What happens if content is unlawful in a particular jurisdiction but does not breach Facebook's rules? Or if a government warns that, unless content is removed, Facebook will no longer be able to operate in the jurisdiction?

The Board recommended that "Facebook should resist pressure from governments to silence their political opposition and consider the relevant political context, including off of Facebook and Instagram, when evaluating political speech from highly influential users." Facebook's response was essentially that they already do this and will continue to seek to improve (i.e. make no hard commitments).

Facebook did, however, provide further insight into how they react to formal government reports that content violates local law. First, they consider whether the content breaches the Facebook Community Standards – if it does, they remove it. If it doesn't, they "conduct a careful legal review to confirm whether the report is valid, as well as human rights due diligence". Where they "believe that reports are not legally valid, are overly broad, or are inconsistent with international human rights standards, we may request clarification or take no action." If action is taken, access will be restricted only in the relevant jurisdiction.

Again, Facebook has left itself considerable room to manoeuvre, and has provided no detail on who makes these (admittedly very difficult) decisions. Are they taken by the content moderation team, or escalated to those dealing with government relations, or even to senior executives? One would assume the latter, but we do not know.

This issue is not going away. Many governments are upping the ante, with threats to take action against local employees and to remove Facebook (and other social media companies) from the jurisdiction altogether. The difficulty faced by Facebook was evidenced in its recent decision to block the hashtag "ResignModi", which it later rowed back on, claiming the original decision was a mistake.

Investigation into Facebook's role in the Capitol riots

Facebook has refused to conduct an internal review into its role in the Capitol riots and report on the findings (although, according to reports, such a review has already taken place). Instead, it has taken the position "that independent researchers and our democratically elected officials are best positioned to complete an objective review of these events", pointing to the fact that it is working with a group of 20 academics "to look specifically at the role Facebook and Instagram played in the 2020 US election".

At first glance this may seem reasonable, although it is not clear quite what data these academics will (and, more importantly, will not) be provided with, nor why Facebook would not publish any research it has already carried out. Nonetheless, the fact that Facebook will not conduct (or at least reveal the results of) such research suggests that – unsurprisingly – it does not want to shine a light on its role in the riots or the preceding events, or, critically, give credence to the argument that its business model is based on directing users towards incendiary content.

Oversight Board's reaction

The Board has put out a short statement in reaction to Facebook's response, in which it stated that it was "encouraged that Facebook is adopting many of the Board’s policy recommendations". It then somewhat pointedly commented that "The Board monitors Facebook’s implementation of all its decisions and recommendations, and intends to hold the company to account on its commitments".

In its responses to most, if not all, Board decisions, Facebook has made – often quite general – commitments to take certain actions, typically without giving any particular timeframe. It seems that the Board has sought to put down a marker: to remind Facebook that these commitments have not been forgotten and that, if it does not act as it said it would, the Board will hold it to account (or at least try to).

Conclusion

On the face of it, there is much in this response by Facebook to be applauded. They have acted swiftly and decisively in relation to Mr Trump (whether it was the right decision, or even a decision that should be made by a private company, is another matter). They have also provided greater clarity on their decision-making processes in relation to content removal for all users.

Nonetheless, many critical questions remain. Are all users treated the same (and should they be)? How will Facebook protect free speech in the face of demands for censorship by governments? Who makes the "cross-check" decisions? And, perhaps most importantly of all, what role did Facebook (and its business model) play in the Capitol riots and the lies propagated in the run-up, and what role does it play in the seemingly increasing divisions in society? The Board must continue to apply pressure.
