The Online Safety Bill is a sweeping piece of proposed legislation regulating online expression but, as currently drafted, it could create uncertainty and impose hefty costs on the businesses it affects.
The UK Government has published a first draft of the long-awaited Online Safety Bill, setting out new measures to "address illegal and harmful content online, with the aim of preventing harm to individuals in the United Kingdom" (Explanatory Notes). The Bill seeks to implement the online safety framework initially outlined in the Online Harms White Paper, and subsequently remoulded by the current Government in its Full Response, published in December 2020.
The draft Bill is substantial and complex, and will need careful scrutiny in order to analyse its full implications. However, it is already apparent that it could have wide-reaching and potentially serious consequences for a broad spectrum of businesses in its scope, if implemented in its current form.
We have briefly outlined the headline issues below.
- The Bill applies to all user-to-user and search services with links to the UK, and imposes on the providers of those services significant and multi-layered duties of care, as well as duties focused on record-keeping and transparency. These duties relate to illegal content and, for some services, content that is 'harmful' to adults and/or children (but which may not be illegal content).
- Subject to a few limited exemptions (for example, internal message boards and intranets, email-only services and SMS/MMS-only services, and 'below the line' content on media articles and reviews of directly provided goods and services), 'user-to-user services' essentially covers any service which enables users to upload or share content which may be encountered by other users (both publicly and privately). For example, this would include online marketplaces, dating apps, games with chat functions, forums and social media platforms, including where users can interact via direct messaging. The potential breadth of businesses this definition could encompass suggests that the Government's estimate that only 24,000 businesses will fall within the Bill's scope is conservative. In fact, a much larger number of online providers will, at the very least, need to consider whether the Bill applies to their operating models and, if so, the obligations to which they will then be subject.
- As noted above, only those services that have 'links to the UK' fall within the scope of the Bill. However, this is very widely defined in the Bill and includes services (i) which have a significant number of users in the UK; (ii) where UK users form one of the provider's target markets; or (iii) that are capable of being accessed by UK users and where there are reasonable grounds to believe there is a material risk of significant harm to individuals in the UK. As a consequence, international providers could look to prevent access by UK users in order to avoid the legislative burden of the Bill, which could have a knock-on effect on revenue streams, such as advertising.
- A smaller group of companies thought to pose a higher risk to users will be deemed Category 1 services, and be subject to additional duties of care. Ofcom will categorise companies as Category 1 based on thresholds to be set out in secondary legislation, which will correspond to the number of users and functionalities, and take into account the risk of harm from content. The Bill does not set out the threshold; however, it is anticipated that Category 1 will only capture the largest social media platforms.
- All regulated providers are under an obligation to assess whether it is possible for children to access their service and, if so, to determine whether the 'child user condition' is met. Additional duties are imposed on those who meet the criteria. The Bill establishes that any analysis carried out under the 'child user condition' must be based on evidence about who actually uses the service, rather than the intended users. This could ultimately require relevant providers to embed age verification processes into their services in order to collect data about who is using their service.
- The Bill imposes myriad obligations on all in-scope providers, including undertaking risk and impact assessments, reporting obligations, and establishing complaints procedures. In particular, all regulated providers must implement processes to mitigate the risk of illegal and priority illegal content (which will be defined by secondary legislation), whilst simultaneously balancing a general obligation to have regard to users' freedom of expression and to protect users from unwarranted privacy infringements. 'Illegal content' is defined by reference to 'relevant offences' (such as terrorism and child sexual exploitation and abuse offences, as well as offences to be specified in Regulations, but specifically excludes, for example, IP infringements and the sale of illegal goods). Content will amount to a relevant offence triggering duties under the Bill where the service provider has reasonable grounds to believe that the use of the words etc in the content, or the dissemination of the content, will constitute a relevant offence – regardless of whether that is in fact the case.
- Transparency obligations sit hand in hand with the duties of care outlined above. All in-scope providers are required to set out in their terms of service how individuals will be protected from illegal content, and to produce an annual transparency report. Category 1 services will also have to publicise how content that is harmful to adults is dealt with, and the "positive steps" they have taken to protect users' right to freedom of expression and to guard against unwarranted infringements of privacy. Such provisions may also expose businesses to a greater risk of civil litigation claims from individuals. Whilst the Bill itself does not include compensation rights for individuals (and, in fact, the White Paper specifically noted that, "the regulatory framework will not establish new avenues for individuals to sue companies"), in practice, the increased availability of information will inevitably pave the way for a fresh batch of litigation in this space.
- Category 1 providers, and those whose services meet the criteria in relation to access by children, must comply with additional duties relating to 'harmful content'. The definition of 'harmful content' in the Bill is particularly problematic. Harmful content for adults (other than designated priority content) includes content where the relevant service provider has reasonable grounds to believe that there is a risk of the content having a significant adverse physical or psychological impact on an adult with ordinary sensibilities. The Explanatory Notes clarify that this would include short- or long-term depression and anxiety. A similar definition applies to content that is harmful to children. These are very subjective issues, and the nebulous language is likely to result in providers taking a more cautious approach when it comes to content moderation (including in relation to potentially lawful content).
- The wide-ranging nature of the obligations on all in-scope providers under the Bill will undoubtedly make them onerous and costly to comply with, and could lead to providers using enhanced monitoring and over-blocking content through automatic filters (not least now that the UK is no longer required to legislate in line with the EU E-Commerce Directive regime, which includes a prohibition on requiring general content moderation). The Government's impact assessment estimates that complying with the moderation and filtering obligations alone will cost in-scope providers £1.7bn over the first 10 years. In particular, such a cost burden will put a strain on start-ups and SMEs, placing larger established providers at an advantage, with the real risk that innovation in the industry could be stifled.
- Category 1 providers will additionally have a duty to protect 'content of democratic importance', which includes any content that 'is or appears to be specifically intended to contribute to political debate in the UK'. This introduces another element of subjectivity into the Bill. Category 1 providers will also be required to safeguard users' access to 'journalistic content', which is very broadly defined. Whilst these provisions attempt to tackle the inherent tension between content moderation and freedom of speech within the Bill, the vague drafting is likely to stifle their impact in practice.
- The draft Bill is light on detail in parts, with certain aspects to be fleshed out through secondary legislation and codes of practice. For example, Category 1 providers have particular obligations in relation to 'priority content that is harmful to adults', yet the Bill does not set out what type of content would be included. Instead, the Bill will rely on secondary legislation, alongside codes of practice and guidance published by Ofcom, to fill in the gaps. Even where the Bill does provide comprehensive detail, the Secretary of State has wide powers to move the goalposts. For example, the Secretary of State can repeal a number of the exempt services listed in Schedule 1 if it is subsequently determined that they present a risk of harm to individuals. This creates a backdrop of uncertainty against an already challenging road to compliance.
- The potential penalties under the Bill are significant. Ofcom can fine companies up to the higher of £18m or 10% of their global annual turnover for non-compliance. The Bill also provides for the future introduction of criminal sanctions for senior managers who fail to comply with information requests from Ofcom.
- In addition, Ofcom is given extensive new powers and responsibilities under the Bill. To name a few, Ofcom will have the power to issue technology notices and enforcement notices, and to apply to the courts for service and access restriction measures. It will also be responsible for undertaking risk assessments, alongside maintaining a register of providers. The Government has estimated that Ofcom will need almost £46m per annum to fulfil its obligations. To cover this cost, Ofcom can require companies who meet a threshold, based on worldwide turnover, to pay an annual fee. Ofcom has discretion to determine both the revenue threshold and the level of fee, with companies given little recourse to dispute the charges.
The draft Bill will now be scrutinised by a joint committee of MPs before being formally introduced to Parliament, and we could see a number of revisions to the current draft before it moves to the next stage of the legislative process. It is important not to lose sight of the fact that the Bill is still in draft form and there is a window of opportunity for businesses impacted by the measures to raise their concerns in relation to some of the issues identified above. Please contact us if you would like to discuss any of the proposals in more detail.
In the meantime, businesses potentially in scope of the Online Safety Bill will also likely need to track the progress of the EU's proposed Digital Services Act (discussed in our December 2020 update). The focus of the Digital Services Act is on online intermediaries' obligations in relation to the removal of illegal (not harmful) content, albeit 'illegal content' is more widely drawn than under the Online Safety Bill (extending, for example, to goods infringing IP rights). The EU regime will preserve the prohibition on general monitoring obligations but, like the Bill, contains a series of obligations on online intermediaries. The Digital Services Act will apply where there is a 'substantial connection to the EU', so certain providers will find themselves in scope of both regimes and will need to put in place appropriate processes which comply with both the EU and UK requirements.