In January 2026, the House of Lords passed an amendment to the Children's Wellbeing and Schools Bill that would require social media companies to use "highly effective" age‑assurance measures to stop anyone under 16 from becoming or remaining a user on their platforms.
Lord Nash led the cross-party move, citing the impending "societal catastrophe" caused by children's addiction to social media. The amendment would effectively introduce a social media ban for under‑16s, with "social media" broadly defined to cover messaging apps such as WhatsApp, the social features of many online games, and social platforms from Instagram to TikTok. The amendment now returns to the Commons, where MPs may yet reject or reword it.
Meanwhile, the Government has announced it will launch a three-month consultation on children's phone and social media use to examine a broader set of possible changes to ensure that children have "healthy online experiences", including overnight curfews, "doom‑scrolling" breaks, and raising the digital age of consent from 13 to 16. The Prime Minister also announced on Monday that the Government would move fast to close loopholes in laws designed to protect children online, including tightening legislation around AI chatbots.
While a renewed focus on tackling the risks social media poses to children is commendable, the consequences of any blanket ban deserve close scrutiny - particularly regarding potential interference with fundamental rights, practical enforcement, and the interplay with existing regulatory oversight.
What is the current legal framework?
Any social media ban in the UK would need to operate within the existing legal framework, including in particular:
- the platform accountability measures under the Online Safety Act 2023 (OSA);
- Article 10 of the European Convention on Human Rights (ECHR), protecting freedom of expression, and Article 8 ECHR, protecting privacy rights; and
- the UK General Data Protection Regulation (UK GDPR) and Data Protection Act 2018 requirements for lawful, proportionate data processing.
What are the implications for children's rights?
As Chris Sherwood, CEO at the NSPCC, has publicly stated, while social media can be dangerous, "for countless children, especially those who feel shut out or unheard offline, social media isn’t a luxury. It’s a lifeline – a source of community, identity and vital support". This was reflected in the open joint statement signed by child protection charities and online safety groups, alongside academics and bereaved families.
A blanket ban risks isolating neurodiverse children and those within other protected groups who rely on social media for communication and support networks. They may suffer a disproportionate impact, losing access to peer communities and specialised support groups unavailable in their immediate physical environment. Such an impact might not be objectively justifiable and should be carefully considered, including as part of the consultation once it is published.
Would a ban interfere with parental rights?
A decision to ban under-16s would also potentially interfere with parents' established Article 8 rights to direct their children's upbringing, including to make decisions about their education and welfare. The principle of proportionality requires that state intervention into family life is justified and goes no further than necessary. Parents who believe supervised social media access benefits their child - whether to access educational content, maintain family connections, or to develop digital literacy in a controlled environment - may challenge the wholesale removal of that discretion. Overriding that parental autonomy could, in some cases, represent an unjustified intrusion into family life and parents' right to choose.
Would a ban be enforceable in practice?
The most significant challenge for any ban would be enforcement. Social media platforms operate internationally, and virtual private networks (VPNs) allow users to access internet content using IP addresses from anywhere in the world. VPNs are widely accessible, often free, and simple to use - making them an obvious workaround for tech-savvy teenagers. If young people widely flout the ban, the law risks being ineffective. There would also need to be clarity on who bears legal responsibility if under-16s do manage to access sites: platforms, children's parents or, possibly, institutions supervising children, such as schools.
Although the ban could, in theory, have extraterritorial effect, as with the UK GDPR and the OSA, there is the practical question of how to compel compliance by global platforms headquartered overseas. All eyes will be on the success, or otherwise, of Australia's new Social Media Minimum Age requirements.
Is the proposed ban the right way forward?
A blanket ban may create more problems than it solves. Rather than providing clear protective measures, it could - in an already complex regulatory landscape - introduce enforcement issues and rights conflicts. It also risks isolating already vulnerable children.
Further, the existing legal framework - including measures to improve platform accountability under the OSA, criminal laws addressing specific harms, and data protection safeguards - already provides scope to address the known risks of social media use by children. We have not yet seen the full impact of platforms being held to account by regulators, particularly Ofcom and the ICO. Improved platform design, algorithmic transparency, enhanced parental controls and stronger digital literacy education may, at least in the first instance, be more effective - and proportionate - than a blunt and potentially unenforceable ban.