The Bill is finally back in Parliament. How has it changed, and will it survive?
Through a press release, two Ministerial statements (available here and here) and a series of amendments – some visible in the latest draft, some yet to be tabled – the Government has indicated the broad thrust of changes to "refocus" the Bill on its original aims: protecting children and tackling illegal content, while preserving free speech, increasing tech accountability and empowering users.
The first two aims have more obviously been met. Social media platforms will now have to publish, not just carry out, risk assessments on the dangers their sites pose to children. They will have more defined responsibilities to provide age-appropriate protections. The Government has also added – or will add – to the Bill's existing list of criminal activity and illegal content several new offences, including assisting or encouraging self-harm online, and controlling or coercive behaviour.
The protections for free speech, however, as well as revisions that are meant to change the behaviour of both platforms and users, are less clear. Category 1 services will no longer need to "address" legal but harmful content (there was no mandate to ban such content, but a danger that companies would err on the side of caution and censor merely offensive or controversial speech). Instead, as part of a "triple shield" for adult users, they will only be able to remove or restrict legal content that is in breach of their terms of service. They will also have to provide users with tools to tailor their online experience, for example to block anonymous trolls or certain content, as well as better reporting and complaints-handling mechanisms.
There is an obvious risk that holding some platforms to the terms of service they themselves set could prompt them to step back from the complex task of assessing harm: in other words, to monitor and moderate less, by watering down their commitments. At the same time, shifting responsibility onto users may tackle harm on an individual level but, as Ellen Judson of Demos cautioned, does nothing to stop harmful content being shared and amplified. This is particularly the case given that children and vulnerable groups will not necessarily have the capacity to opt out. Toby Young in the Spectator raised a different concern: that platforms will make "safe browsing" mode their default setting, filtering out contentious material.
We should not dismiss the Bill's long overdue push to stamp out illegal content and to protect children. But there is a fear that, in terms of the broader, more insidious harms, the triple shield leaves too much in the hands of platforms. Some may step up with more thoughtful terms, and engage more with content and users, but others, especially those already poor at policing themselves, will be fundamentally less accountable. They will continue to duck the nuanced, proactive monitoring and systemic changes – what content gets served to whom, and how much – that would make the internet a safer environment, but not a sterile one.
To ensure proper scrutiny of the key amendments, the Government has taken the unusual step of sending the Bill back to a Public Bill Committee, with the aim of reaching the House of Lords in January. There was real concern that this would not leave enough time for the Bill to be passed before the current Parliamentary session was due to end in spring 2023, which would have meant that it was scrapped altogether (or at least until after the next election). However, the Government has just announced that the Parliamentary session will now be extended until autumn 2023, making it very likely that the Bill will become law, although quite in what form remains unclear.
For more information, including our previous instalment and details of our forthcoming webinar, visit our Online Safety Bill Hub.
If you have any feedback or questions, do let us know.
To receive further updates as the Bill progresses, please sign up to our Online Safety Bill mailing list.