Facebook's responses to the first set of decisions by the Oversight Board are in. As predicted by many, they are a mixed bag – not the model of transparency that Facebook claim them to be, but also not the complete stonewalling that some critics had imagined.
According to Facebook, of the 17 recommendations made by the Board, they are "committed to action" in relation to 11, are "assessing the feasibility" of 5, and have decided that "no further action" is required in relation to 1 (the recommendation to take a more nuanced approach to COVID-19 misinformation).
On the face of it, this is a positive and constructive response. On closer inspection, however, the picture is less clear. First, the Board did not make 17 recommendations, but somewhere closer to 22; some recommendations have been combined, while others are entirely absent from Facebook's response. Second, while some of the responses are quite straightforward (e.g. "After the board surfaced this issue, we fixed the mistake"), most are commitments to "continue to explore" issues, which at times do not directly address the specifics of the recommendations. Third, and linked to the previous issue, many of the "next steps" are not accompanied by timescales, and the impression is that this may be the last we hear of them.
The response to the recommendations in relation to the breast cancer awareness case in Brazil is of particular interest. As previously noted, when this case was chosen for review by the Board, Facebook acknowledged that it had made a mistake and requested that the Board cease to consider it; upon the Board's refusal to comply with this request, Facebook made submissions to the Board that it should focus on the outcome of enforcement rather than the method. Clearly something about this case had hit a nerve (perhaps combined with concerns about the makeup of the review panel – information that is not public, but of which Facebook are presumably informed).
Facebook's response contains no mention of the recommendation to "Implement an internal audit procedure to continuously analyse a statistically representative sample of automated content removal decisions to reverse and learn from enforcement mistakes." Facebook also refused to commit to "Ensure users can appeal decisions taken by automated systems to human review when their content is found to have violated Facebook’s Community Standard on Adult Nudity and Sexual Activity", "Expand transparency reporting to disclose data on number of automated removal decisions, and the proportion of those decisions subsequently reversed following human review", or "Inform users when automation is used to take enforcement action against their content, including accessible descriptions of what this means." In each instance (aside from the recommendation to conduct an audit), reasons were provided for this stance and further work was promised, but it is notable that these recommendations, which cut to the core of Facebook's business model, have not been adopted.
Nonetheless, there is merit to many of the responses that were provided. Interesting information is revealed, as are genuine difficulties that Facebook are considering. For example, in relation to the Board's recommendation that Facebook "Provide a public list of the organizations and individuals designated “dangerous” under the Dangerous Individuals and Organizations Community Standard", Facebook stated "Ahead of sharing more details about these terms, we need to ensure that this information will not allow bad actors to circumvent our enforcement mechanisms. Our teams need more time to fully evaluate whether sharing examples of designations will help people better understand our policy, or if we should publish a wider list." This seems like a very real concern, and one that does need to be looked at carefully.
In short, Facebook have responded as expected. They have provided some information and made some changes to their policies; they have also highlighted real issues for further consideration and discussion. The recommendations that go to their bottom line, however, have been largely ignored or postponed, perhaps indefinitely.
Cynics would say that this is exactly what Facebook want – they have given the veneer of submitting to regulation without accepting any recommendations against their will (they have already made clear that they want the Board to look at outcome, not method). If this is the case, they may have been somewhat disturbed to hear what one Board member, Alan Rusbridger, said last week in evidence to the House of Lords Communications and Digital Committee: "We’re already a bit frustrated by just saying ‘take it down’ or ‘leave it up’. What happens if you want to make something less viral? What happens if you want to put up an interstitial? What happens if, without commenting on any high-profile current cases, you didn’t want to ban someone for life but wanted to put them in a ‘sin bin’ so that if they misbehave again you can chuck them off?" Even more alarming was his statement that "At some point we’re going to ask to see the algorithm, I feel sure, whatever that means."
At this stage, that seems very much like wishful thinking, but it is clear that the Board (or at least some of its members) is aiming to significantly expand its jurisdiction and powers. Certainly the Board will only grow in profile in the coming weeks: as Mr Rusbridger alluded to, its next major decision will be whether or not to uphold the ban on Donald Trump.