As we begin to realise just how profoundly online misconduct is affecting society, calls are growing for the internet to be brought under real-world controls. There is, at the same time, a growing consensus that at least part of the answer is regulation. Earlier this month, the House of Lords announced an inquiry into whether internet regulation "should be improved". This adds to an ongoing Parliamentary inquiry into fake news, as well as threatened new legislation to deal with the impact of social media on children. But what exactly do we mean by regulation, what measures are workable, and what protections already exist?
There is no single or specific regulator for the internet; no supra-national 'online law'. The whole issue is a jurisdictional conundrum. In the UK, we have a patchwork of statutory and non-governmental organisations that regulate behaviour associated with the internet. The Information Commissioner’s Office, for example, oversees data protection and privacy; Ofcom regulates TV-like content from on-demand programme services; and the Advertising Standards Authority upholds online advertising standards. Other bodies, such as the Financial Conduct Authority, regulate content specific to a particular industry, such as the website content of a financial services company.
From a legal point of view, where online offences - including stalking and harassment - are committed in the UK, the courts naturally enough have jurisdiction. To a certain extent our common law system also gives judges the flexibility to reinterpret legislation for the digital age. See, for instance, the recent High Court findings on reconciling the 'right to be forgotten' - the right to ask search engines to 'delist' results about you that are inaccurate or outdated - with the Rehabilitation of Offenders Act. Parliament's intention, back in 1974, was to allow those with spent convictions to move on with their lives, while at the same time allowing publishers to report (without malice) on those convictions. But legislators could not have foreseen the scope or accessibility of such publication via the internet, and how rehabilitation could be undermined by a quick search on Google. Ultimately, the Court upheld a 'balancing exercise' approach to deciding whether or not search results should be removed, and confirmed that spent convictions should weigh towards delisting. That interpretation of a 40-year-old law may or may not be satisfactory, but it is a reminder not to overlook current tools. One possible answer to increased crime in the online space is better policing, and creative application of existing provisions.
Beyond the English law position, EU law has a certain reach, although some would argue it hinders more than it helps. Under the E-Commerce Directive (2000), social media platforms like Facebook qualify for a 'hosting' exemption from liability for damages, on the basis that they have no knowledge of the unlawful content and act promptly to remove it once alerted. Likewise, in the US (which also has no single regulatory agency governing the internet), the First Amendment protection of freedom of speech, and more specifically section 230 of the Communications Decency Act 1996, offer substantial protection to Internet Service Providers (ISPs) hosting third-party content.
Such exemptions for internet platforms have created what the Committee on Standards in Public Life - in the context of a report on the online intimidation of those in public office - called a "perverse incentive" to shun active monitoring or moderation in favour of a 'notice and takedown' model: "Facebook, Twitter and Google are not simply platforms for the content that others post; they play a role in shaping what users see. We understand that they do not consider themselves as publishers, responsible for reviewing and editing everything that others post on their sites. But with developments in technology, the time has come for the companies to take more responsibility for illegal material that appears on their platforms."
The Committee went on to recommend legislating to shift the balance of liability for certain types of illegal content, such as death threats, onto social media companies, once the UK's obligations under the Directive fall away on leaving the EU. But they acknowledged the fundamental issue of legal liability, which any new regulation will need to address. Although the position differs slightly for each of hosts, ISPs and search engines, broadly speaking, and certainly pre-notification, none is considered a publisher under English law. That means that, unlike the often anonymous online troll, they are likely to escape primary liability.
There are other issues to balance too. Many countries - although the US appears to be an important exception - subscribe to the principle of net neutrality: the idea that ISPs should treat all data equally. One of the aims is to prevent ISPs from charging content providers for 'fast lanes' that deliver content more quickly, while slowing content for others. But, fair as it seems to promote equality of access, rules that prevent ISPs from blocking or prioritising content might, if not carefully drafted, cut against moves to make them more, not less, of a gatekeeper.
There is clearly much to disentangle if regulation is to be effective and properly targeted, as well as proportionate. As Karen Bradley, former Secretary of State for Digital, Culture, Media and Sport, warned last year: “We need to be careful here that what we do is not a sledgehammer to crack a nut – a piece of legislation where we say under UK common law these platforms are now publishers, which could impact on freedom of speech, civil liberties and the ability of people to enjoy the benefits that the internet brings. But we have to do this in a way that doesn’t allow harm.” She suggested doing "as little legislatively as possible".
Nevertheless, it seems pretty clear which way the wind is blowing. Mark Zuckerberg himself, albeit under some duress, conceded that Facebook would "absolutely" work with the US Congress on the "right" regulation. He acknowledged: “The internet is growing in importance around the world in people’s lives and I think that it is inevitable that there will need to be some regulation. So my position is not that there should be no regulation but I also think that you have to be careful about regulation you put in place.”
To be effective, any new regulator would have to be independent, with enough muscle to deter and punish. Truly global regulation is likely some way off, but as the Government's Internet Safety Strategy recognises, we need in the meantime to push, through international institutions such as the UN and the EU, for global changes to behavioural norms. And positives can be taken from the GDPR, which shows that tough, cross-jurisdictional regulation is possible. The internet, and the industry that underpins it, may be vast, dominant and amorphous, but we have relevant international regulatory models - such as those in aviation and financial services - to draw on.