The 2019 Christchurch massacre was a ‘performance crime’ perpetrated by Brenton Tarrant. The Australian was radicalised in a series of far-right online echo chambers and infamously livestreamed his shootings; the footage was viewed more than 4,000 times before it was finally taken down. Reuploads can still be found on the internet to this day, where they continue to inspire extremists worldwide.
Christchurch was not the first mass terrorist incident with links to online platforms. In the years leading up to it, Europe alone had endured five deadly terror attacks, in Brussels, London, Paris, Nice, and Berlin, all of which were either coordinated via social media or perpetrated by attackers who had been radicalised online. The problem was clearly international, and it demanded an international solution.
Exactly two months after the attacks, on May 15, 2019, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron brought together leaders from around the world to adopt the Christchurch Call to Action.
The Christchurch Call was a commitment by governments and tech companies to eliminate terrorist and violent extremist content online while ensuring that the internet remained otherwise free and secure for all. To date, the Call has been supported by 62 countries, organisations, and tech platforms, including the European Commission and 21 EU member states.
This year, on May 15, world leaders convened for a video conference to mark the second anniversary of the Christchurch Call. At the meeting, Ardern called on governments to deepen their understanding of the role that algorithms play in the radicalisation process, so that they can make positive interventions and prevent further attacks.
Two weeks ago, the European Parliament approved the EU’s Regulation on the Dissemination of Terrorist Content Online (TCO), a sweeping piece of legislation which, among other things, will require online platforms to remove content deemed terrorist in nature within one hour of notification by the authorities. The TCO will work in tandem with the upcoming Digital Services Act (DSA) to protect citizens from online hate speech and extremist propaganda.
Germany, France, and the UK have all recently passed or introduced legislation of a similar nature.
In France, the government recently unveiled a new counterterrorism and intelligence bill allowing for greater surveillance of extremist online networks. One measure in the bill proposes to extend French intelligence services’ use of algorithms to track down extremists online.
Earlier in May, the German parliament approved an amendment strengthening the Network Enforcement Act (NetzDG) by expanding transparency obligations for social media companies and other online actors, making complaint procedures more user-friendly, and regulating researchers’ access to social media data.
This month, the UK Government unveiled the first draft of its Online Safety Bill, which will allow the Office of Communications (Ofcom) to fine major platforms up to £18m for failing to remove harmful content.
As is to be expected of democratic legislation in general, all of these bills have received pushback. Although the TCO, for example, was passed without any official objections, a number of MEPs did express reservations during the debate preceding its passage about the possibility of it shrinking the space for legitimate public debate.
Likewise, the Free Democratic Party in Germany has opposed NetzDG on the grounds that it endangers freedom of expression, and UK Prime Minister Boris Johnson is facing opposition to the Online Safety Bill from MPs within his own party.
However, these criticisms demonstrably miss their target. While, as the Christchurch Call acknowledges, freedom of expression is important to maintain, online platforms as such have never truly been ‘free’.
The content they feed us is, to Ardern’s point, largely controlled by internally managed algorithms. We have little insight into how those algorithms work, and the decisions made to change them sit entirely outside democratic control. The companies themselves are accountable not to you and me, but to shareholders, who will allow and disallow whatever maximises profits.
Even if governments were to abdicate their duty to protect citizens and decline to regulate these platforms, the PR crises generated by terrorist incidents such as Christchurch would still induce drastic, unaccountable swings in platform policy, and it is these swings that champions of freedom of expression have far more cause to worry about.
In this landscape, accountable government legislation is the best option we have. The seemingly sudden emergence of so much of this legislation at once conceals a long period of gestation, in which the policies we are now seeing were openly formulated and revised to ensure they could be accepted by a majority of citizens.
The two-year anniversary of the Christchurch Call serves as a reminder of what it takes to pass content moderation laws in democratic societies. These are not blanket bans on speech but carefully considered agendas meant to sustain healthy and open debate in the long run. Those who want to protect freedom of expression ought to recognise the essential role such laws serve.