
Facebook “Supreme Court” comes out fighting

Posted on 19 February 2021

Eighteen months after the Facebook Oversight Board was announced with much fanfare, on 28 January it finally released its first decisions. The five decisions, which have since been followed by a further decision on 12 February, cover a variety of topical issues from all over the world, including Coronavirus misinformation and religious and ethnic hate speech.

The decisions raise many interesting questions. For example, at a time when Facebook is coming under heavy criticism for the content it doesn't remove, five of the six judgments overturn Facebook's original decision to remove content. Does this suggest that the Board will be more concerned with protecting free speech than with the harm such speech can cause (and will this allow Facebook to move further in this direction under the cover of the Board)? Or is this simply a result of the narrow scope of the Board's remit – it can only consider decisions to take down content, not decisions to leave content up? Or is it simply that no such conclusions can be drawn from so few decisions?

The decisions also shone a light on Facebook's opaque decision-making process (we now know, for example, that there is a Dangerous Individuals and Organizations policy), and highlighted the potential tension between the three "laws" considered by the Board – Facebook's content policies, Facebook's values, and international human rights standards (what happens when they conflict?).

Perhaps most interesting of all, however, is the combative nature of these decisions. The Board had come under significant criticism before it had even made a single decision; it was seen as a PR stunt by Facebook – well-paid "judges" providing a veneer of regulation in order to discourage actual regulation. This may be true, in part at least. The Board has no basis in law and Facebook cannot be compelled to follow its decisions. (Facebook has stated that it will follow all decisions on whether content should be reinstated and will consider all other guidance – however, it cannot be forced to.) That said, it would be difficult for Facebook to completely ignore decisions made by the Board, especially with a new Democratic administration in Washington that will be under pressure from many in the party to clamp down hard on social media companies. The only power the Board has is the power to embarrass Facebook; in the current political climate, however, this is not insignificant. The question was whether the Board would seek to use this power.

The first set of decisions very much suggests that it will. The most obvious example is the decision concerning an Instagram post in Brazil intended to raise awareness of breast cancer, which contained eight photographs showing breast cancer symptoms, each with a corresponding description. The post was removed as a breach of Facebook's Community Standard on adult nudity and sexual activity. After the Board decided to consider the case, Facebook reinstated the post and acknowledged it had made a mistake. It then argued that the case should no longer be considered by the Board as the mistake had been rectified. In the first sign of its willingness to flex its muscles, the Board disagreed and proceeded to consider the case.

In a further challenge to Facebook, the Board used its decision to make a number of recommendations, including on the method of decision-making, notwithstanding Facebook's submission that "Facebook would like the Board to focus on the outcome of enforcement, not the method". The recommendations included the following:

  • Ensure that users are always notified of the reasons for the enforcement of content policies against them.
  • Inform users when automation is used to take enforcement action against their content.
  • Ensure that users can appeal decisions taken by automated systems to human review when their content is found to have violated Facebook's Community Standard on adult nudity and sexual activity.
  • Implement an internal audit procedure to continuously analyse a statistically representative sample of automated content removal decisions to reverse and learn from enforcement mistakes.
  • Expand transparency reporting to disclose data on the number of automated removal decisions, and the proportion of those decisions subsequently reversed following human review.

This puts Facebook in a real bind. Implementing these recommendations would be a hugely costly exercise, in both the short and the long term, and would involve a wholesale change to its content moderation processes. It would also require the disclosure of data that Facebook would certainly prefer to keep private, opening it up to far greater scrutiny. On the other hand, if Facebook refuses to act, it will cast real doubt on how seriously it is taking the whole process. Refusal would be seen by many as evidence that self-regulation cannot work, and that the only way forward is to introduce legally binding and enforceable regulation.

Facebook has until the end of the month to respond to the recommendations. However, in its immediate response it foreshadowed that a substantive response would probably take longer: "Some of today’s recommendations include suggestions for major operational and product changes to our content moderation — for example allowing users to appeal content decisions made by AI to a human reviewer. We expect it to take longer than 30 days to fully analyze and scope these recommendations."

The Board has fired the opening shot; now it's over to Facebook.
