The Government has committed to putting the Online Harms Bill before Parliament next year. It will cover a broad range of online services that host user-generated content accessible in the UK, or that facilitate public or private online interaction involving one or more UK users, including social media, dating apps and search engines, but will divide those services into tiers. Only the most high-risk, high-reach businesses – such as Facebook, Instagram, Twitter and TikTok – will fall within "Category 1" and face the strictest requirements. They will be expected to i) remove and prevent the spread of illegal content and activity; ii) protect children; and iii) tackle content that is legal but harmful to adults. The Bill will set out a general definition of harmful content, as well as "priority" categories such as material relating to eating disorders, self-harm or suicide.
The Government estimates that fewer than 3% of businesses will be in scope, the vast majority of which will be "Category 2" businesses; these will not have to deal with the final group of content that is lawful but harmful to adults. All companies in scope, however, will have a new "duty of care" towards users. Importantly, content published by a news publisher on its own site, including user comments, will not be in scope, and there will be "robust protections" for journalistic content shared on in-scope services. There are also exemptions for "low-risk" services, including services used internally by businesses, such as intranets; online services managed by educational institutions; and communications sent by email, telephone or SMS.
The new legislation will be overseen and enforced by Ofcom, the existing media regulator, which will have the power to issue substantial fines of up to £18m or 10% of global turnover, whichever is higher, as well as to impose "business disruption measures" including, in the worst cases of non-compliance, blocking those services from being accessed in the UK. Ofcom will take a "risk-based and proportionate approach" to regulatory activity, focusing on companies whose services pose the greatest risk of harm. In the meantime, the Government will publish interim (non-binding) codes on terrorism and child sexual exploitation and abuse, to help companies start to implement the necessary changes and bridge the gap until Ofcom issues its statutory codes of practice.
Head of Reputation Protection, Emma Woollcott, commented:
"There is an urgent need to address online harms. We welcome the Government's commitment to bringing a Bill before Parliament next year, as well as its plans for interim guidance before the new laws come into force. It is right that the larger platforms will bear the burden of investing in tools and technology to keep people safe online, particularly children. There are important questions still to be answered, including how Ofcom will tackle harmful but legal content (such as COVID-19 vaccine disinformation) so as to protect freedom of speech, and how the burdens imposed on the wide range of businesses in Category 2 will be defined and regulated. The Government has made clear that definitions will need to be flexible enough to respond to new dangers, but clear enough to ensure that regulation is proportionate yet effective. There are significant challenges to grapple with, including how the new law will balance the privacy benefits of end-to-end encryption versus the push for transparent monitoring and reporting. We applaud the Government's momentum but await these finer details."
In addition, Emily Dorotheou, Associate, remarked:
"The Government will need to ensure that its Online Harms Bill is aligned with the ICO's Age Appropriate Design Code. The Code requires, of those companies within its scope, that "high privacy" settings are used as a default where children are accessing their online services. This ensures that the confidentiality of children's private communications (and their experience in a digital environment) is respected. Unless the settings are changed, companies' use of children's personal data is limited to that which is "essential" to provide the service unless companies can demonstrate a compelling reason for a different default setting, taking into account the best interests of the child. Therefore, if the Bill is not aligned to the Code, there is a risk that companies will be simultaneously required to monitor children's communications for harmful content yet also required to keep them confidential."