Hate Speech, Free Speech and the Draft Online Safety Bill


In this post, Dr Peter Coe discusses the human rights implications of the draft Online Safety Bill.

Dr Peter Coe

With the Draft Online Safety Bill returning to Parliament this month, and with significant changes to it expected, I thought this would be a good opportunity to discuss an aspect of the Bill (as it currently stands) that is concerning from a free speech perspective: how it proposes to deal with hate speech. We already know, for example, that the ‘legal but harmful’ provisions will be scrapped. To counter opposition to this change – the Labour Party has slammed it as “a major weakening” of the Bill – the Culture Secretary, Michelle Donelan, says it has been replaced by a “triple shield” to protect users. This comprises: (i) a duty on in-scope companies to protect users from illegal content, bolstered by the creation of new offences; (ii) a strengthened obligation on in-scope companies to enforce their terms and conditions, especially those concerning access for children; and (iii) a requirement that platforms introduce a system giving users more control to filter out harmful content they do not want to see – although how this will work in practice, and what, if any, duties it would impose on in-scope companies, is unclear.

Hate speech is a problem exacerbated by the pervasiveness of social media and by the ability platforms give users to publish anonymously or under a pseudonym. Despite platforms promising, in various ways, to tackle such speech, and despite their signing up to a plethora of voluntary codes, in recent years a growing number of voices from different groups have argued that platforms are not doing enough, or simply do not care enough, to tackle the problem, which not only persists but seems to be getting worse. Indeed, the situation came to a head in the aftermath of England’s loss to Italy in the European Championship final last year, when the England players Marcus Rashford, Jadon Sancho and Bukayo Saka faced a torrent of hate speech on Twitter. It subsequently transpired that, although the platform permanently suspended the accounts of fifty-six persistently abusive users on 12 July 2021 (the day after the final), thirty of those offenders continued to post, or ‘respawn’, on the network, often under slightly altered usernames. Dame Melanie Dawes, the Chief Executive of Ofcom (which will be the Online Harms Regulator once the Bill is enacted), stated that these events brought ‘[t]he need for regulation … into even sharper focus’ and that ‘the platforms failed to do enough to remove these appalling comments at a critical national moment. They simply must do far better than this in the future.’

Consequently, hate speech is one of the reasons the UK government, like governments around the world, is under pressure to find a way to sanitise our online environment and, as we all know, its answer to this is the Draft Online Safety Bill.

So, what is hate speech and why does its definition or meaning matter for the purposes of free speech? It is this question that I want to discuss in this blog. And, as we shall see, it matters a lot. Conceptually, hate speech has been classified by various commentators as abusive speech that targets members of certain groups, typically minority groups; a broad conceptualisation that accords with, for instance, the UK government’s classification of ‘hate crime’ in its Online Harms White Paper (HM Government, CP 354, April 2019) (at [7.16]). So far, so simple. You would perhaps think, therefore, that identifying hate speech is easy. But, as I hope will become apparent, determining what is, or is not, hate speech – and consequently defining hate speech for the purposes of law, and how this intersects with free speech – continues to be problematic, in part because what we perceive as being, and therefore define as, hate speech changes regularly. By way of example, making misogyny a hate crime is included in Lord McNally’s Private Member’s Bill, the Online Harms Reduction Regulator (Report) Bill. However, the former Home Secretary, Priti Patel, rejected attempts to classify misogyny as a hate crime, arguing that doing so would deliver only ‘tokenistic’ change and that adding it to the scope of hate crime laws would make it harder to prosecute sexual offences and domestic abuse.

In the context of the Draft Online Safety Bill, for reasons I will explain, the changing definitions of hate speech, and the incoherence this creates, raise free speech concerns.

Article 10(1) of the European Convention on Human Rights protects freedom of expression by providing: ‘Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers’. Article 10(2) qualifies this right, in that a state can restrict the Article 10(1) right in the interests of, inter alia, ‘the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others’. In respect of the offline world, the European Court of Human Rights’ jurisprudence gives the protection afforded by Article 10(1) considerable scope, in that it consistently holds that it is applicable not only to information or ideas ‘… that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State or any sector of the population. Such are the demands of that pluralism, tolerance and broadmindedness without which there is no “democratic society”’ (Handyside v United Kingdom App no 5493/72 (ECHR, 7 December 1976) [49]; see also Sunday Times v United Kingdom (No. 1) App no 6538/74 (ECHR, 26 April 1979) [65]; Lingens v Austria App no 9815/82 (ECHR, 8 July 1986) [41]; Axel Springer AG v Germany (No. 1) App no 39954/08 (ECHR, 7 February 2012) [78]; Thorgeir Thorgeirson v Iceland App no 13778/88 (ECHR, 25 June 1992) [63]).

That said, the Strasbourg Court’s case law indicates that it is prepared to limit this wide scope to take account of the amplified threat that the internet and online speech pose to countervailing fundamental rights, so long as this limitation falls legitimately within the parameters imposed by Article 10(2). Indeed, in Delfi AS v Estonia App no 64569/09 (ECHR, 16 June 2015) [110], the Court said that ‘defamatory and other types of clearly unlawful speech, including hate speech and speech inciting violence, can be disseminated like never before, worldwide, in a matter of seconds, and sometimes remain persistently available online’.

The Bill effectively delegates the definition of ‘illegal content’ to other legislation: clause 41(2)(a) and (b) provide, respectively, that for user-to-user services (that is, internet services that enable user-generated content, such as Facebook or Twitter) illegal content is regulated content that amounts to a relevant offence, and that for search services it is content that amounts to a relevant offence. Unfortunately, for reasons I have already alluded to, the definition of hate speech, and therefore of the concomitant crimes, is murky and can lead to confusion amongst the public, platforms, Ofcom and even prosecutors, which in turn can have serious implications for the operation of free speech. Without a clear definition of hate speech, it is potentially very easy for the ECtHR’s established free speech principles to be illegitimately, but perhaps accidentally, restricted; an issue summed up in evidence presented to the House of Lords Communications and Digital Committee by Ayishat Akanbi, who suggested that the distinction between hate speech and ‘speech we hate’ can be hard to see (House of Lords Communications and Digital Committee, ‘Free for all? Freedom of expression in the digital age’, 1st Report of Session 2021-22, HL Paper 54, 22 July 2021, 17, [7]). This is not helped by regular changes to definitions of hate speech, as I mentioned at the beginning of this blog, or by the different legal parameters of hate crime that exist across a raft of criminal laws, including the Public Order Act 1986, the Crime and Disorder Act 1998, the Criminal Justice Act 2003, the Malicious Communications Act 1988, the Racial and Religious Hatred Act 2006, the Communications Act 2003 and even the Football (Offences) Act 1991. Consequently, these laws were subject to a Law Commission consultation that investigated how they should function in practice and possibilities for reform (Law Commission, Hate crime laws: A consultation paper (Law Com CP 250, 23 September 2020)).

So, this leads me on to the mechanics of the Bill itself. Clauses 12 and 23 set out a general duty, applicable to user-to-user and search services respectively, to ‘have regard to the importance of’: (i) ‘protecting users’ right to freedom of expression’ and (ii) ‘protecting users from unwarranted infringements of privacy’. In addition, clause 13 provides ‘duties to protect content of democratic importance’ and clause 14 prescribes ‘duties to protect journalistic content’. However, unlike the duties under clauses 12 and 23, the clause 13 and 14 duties only apply to ‘Category 1 services’ (which are currently undefined user-to-user services to be included in a register maintained by Ofcom, pursuant to clause 59(6)). The fact that the core free speech duties under clauses 12, 13 and 14 of the Bill only require platforms to ‘have regard to’ or, in the case of clauses 13 and 14, ‘take into account’ free speech rights or the protection of democratic or journalistic content means that platforms may simply pay lip service to these ‘softer’ duties when a conflict arises with the legislation’s numerous ‘harder-edged’ safety duties. This distinction between the harder and softer duties gives intermediaries a statutory footing to produce boilerplate policies stating that they have ‘had regard’ to free speech or privacy, or ‘taken into account’ the protection of democratic or journalistic content. So long as they can point to a small number of decisions where moderators have had regard to, or taken into account, these duties, they will be able to demonstrate to Ofcom their compliance with the duties imposed by the Bill. It will be extremely difficult, or perhaps even impossible, to interrogate the process. Furthermore, as I have already mentioned, the Strasbourg Court is clear that although it is prepared to accept greater limitation of the scope of Article 10(1) in the context of online speech, this limitation must still fall within the parameters of Article 10(2). Arguably, the requirement that clause 12 imposes on platforms to merely ‘have regard to the importance’ of ‘protecting users’ right to freedom of expression within the law’ does not go far enough to ensure the Bill complies with this jurisprudence.

Thus, by making online intermediaries responsible for the content on their platforms, the Bill requires them to act as our online social conscience, thereby making them de facto gatekeepers to the online world. Although ‘privatised censorship’ has taken place on platforms such as Facebook and Twitter since their creation, the Bill gives platforms a statutory basis for subjectively evaluating and censoring content. This, along with the potential conflict between the harder and softer duties, could lead platforms to adopt an over-cautious approach to monitoring content, removing anything that may be illegal (including content that they think could be hate speech) or may be harmful and that would therefore bring them within the scope of the duty and regulatory sanctions. This risk is amplified by the lack of a clear definition of hate speech. Such an approach could lead to legitimate content being removed because it is incorrectly thought to be illegal. Cynically, it may also provide platforms with an opportunity, or an excuse, to remove content that does not conform with their ideological values on the basis that it could be illegal. And let us not forget that, rather than relying on human moderation, platforms will be deploying algorithms and AI for this task, which will be programmed, I would imagine, to err on the side of caution.

What the Bill will look like when it eventually comes into force (and it certainly seems as though it is now a question of when rather than if) remains to be seen (and, in any event, much of the legalistic detail is uncertain and undefined and will be subject to secondary legislation post-enactment). But what I think is certain – and there are not many certainties when it comes to this Bill – is that the Bill’s treatment of hate speech (which is just one of its many contentious aspects) will be a source of argument and debate that will rumble on for some time to come. Indeed, with the Bill due to pass through the Lower House by the end of January 2023, giving the Lords just over two months to scrutinise it before the parliamentary session ends, we are already hearing warnings from some peers that this may not be enough time to analyse such a high-profile, complex, and contentious Bill thoroughly and complete the legislative process. We watch this space with bated breath!

A version of this post will appear as the Editorial in the February 2023 issue of Communications Law and is published here with kind permission.
