In mid-March, Facebook founder Mark Zuckerberg set out his vision for the future of the social network in an essay titled 'A privacy-focused vision for social networking'. Twitter also recently announced an update to its abuse reporting functions, allowing users to flag tweets that share their personal information. These measures seem to signal a radical shift for social media sites – or do they?
Zuckerberg imagines a "pivot to privacy": from a network that has encouraged the openness, connection and sharing of a "town square" to – apparently – the privacy of a "living room". He recognises that, over the last 15 years, Facebook and Instagram have helped users connect with friends and communities, but says that many users now prefer "the intimacy of communicating one-on-one". He has also declared that "the future of communication will increasingly shift to private, encrypted services." Facebook plans to reduce permanence, with content disappearing automatically, and it will allow messages to be encrypted end-to-end.
These measures have come at the right moment. The Cambridge Analytica scandal and the introduction of the GDPR have focused public attention on how companies collect and use data, and on the consequences of targeted content. Over and above the problem of fake news, there is increasing awareness, for example, of how political campaigns have harvested data to deliver micro-targeted advertising based on individuals' views and preferences. There is also growing alarm about how content shared on social media contributes to bullying in schools, and may even encourage self-harm where users are exposed to graphic images. Compounding the problem is the speed with which content and misinformation can spread, and the reluctance or inability of platforms to monitor and remove unlawful content. Earlier this year, a committee of the House of Commons found that "Companies like Facebook should not be allowed to behave like 'digital gangsters' in the online world". It urged them to take greater responsibility, including assuming more legal liability for content posted by users. To date, they have been largely successful in arguing that they are "platforms not publishers".
Some may question, however, to what extent the published proposals are for the benefit of Facebook's public perception rather than its users. Facebook admits to having a privacy problem and, after a difficult 2018, fewer people trust the network than ever. How realistic is it that Facebook, and indeed other social media platforms, can and will change? As recently as 2010, Zuckerberg decreed that privacy was "no longer a social norm" – can the company's attitude really swing 180 degrees in less than a decade? At the same time, how will putting users' information beyond the reach of governments and regulators help to address issues around content and transparency? For sceptics, the recent announcements are merely a superficial bid to appease – and retain – users, without taking responsibility for the actions of users given a newfound reach and influence by powerful technology. Even if such measures bolster users' privacy, they are surely only part of the answer to the much wider question of whether, and how, to police the internet.