The Online Safety Bill returned to the Commons yesterday, with the aim of moving to the Lords in February and becoming law by autumn. The fight to shape it is by no means over. In response to what has been described as "watering down" by removing provisions relating to legal but harmful content, campaigners are pushing for various amendments, including to reinstate those provisions or else to introduce different levers to ensure that tech companies are held accountable for real harms.
One area of concern is that, as the Bill stands, the biggest platforms are not under enough pressure to tackle online misogyny. We know that women face disproportionate abuse and harassment online, and that this only increased during the pandemic. According to recent reports by the charity Refuge, technology-facilitated domestic abuse is closely linked to women's physical safety. Almost one in five survivors of domestic abuse perpetrated via social media (17%) said they felt afraid of being attacked or subjected to physical violence because of the tech abuse; 15% felt their physical safety was more at risk, and 5% felt more at risk of so-called "honour"-based violence. In addition, 12% of survivors felt afraid to leave the house because of the abuse. Fully 95% of domestic abuse survivors said they were not satisfied with the support they received after reporting domestic abuse-related content to a social media company.
Last month, End Violence Against Women (EVAW), a coalition of charities, academics and campaigners, urged the Government to introduce within the Bill a mandated Violence Against Women and Girls (VAWG) Code of Practice. This would provide recommended guidance and best practice on the appropriate prevention and response to VAWG, and give it a similar level of focus to content relating to terrorism and child sexual exploitation and abuse, which already have codes mandated by the Bill. An accompanying petition for more specific measures, led by EVAW and the internet safety charity Glitch and presented to Government yesterday, had already gathered more than 88,000 signatures. It deserves widespread support.
One of the insidious aspects of online misogyny, as with so many online harms, is that abuse online is simply not the same – in terms of spread and impact – as its offline or street-level equivalent. Misogyny is not new, but it has been given an unprecedented platform, as seen in the widespread sharing of content and views held by online personalities such as Andrew Tate.
The impact of platforms driving harmful material to vulnerable users came into sharp focus in the tragic case of 14-year-old Molly Russell. An inquest found she "[suffered] from depression and the negative effects of online content", ultimately resulting in her death. In some cases, the coroner found, the content was "particularly graphic", and "romanticised acts of self-harm". The way the platforms used algorithms also led, he added, to "binge periods" of content, sometimes selected and provided without Molly requesting them.
Focusing on illegal content is a welcome starting point for online safety, but not enough. The most powerful tech companies need to deploy their considerable wealth and technical skills to address not just how they monitor and remove harmful content, but how, and to whom, that content is served. The debate we need to have – over the next few months, both inside and outside Parliament – is a much more nuanced version of the age-old tension between privacy and free speech. It is a discussion about where we draw the line with respect to online speech when words – and content more generally – can be amplified by sophisticated algorithms, and targeted at users who may be powerless to turn away.