The Online Safety Bill: Where Are We Now and Will It Succeed?

Image via Shutterstock

The House of Lords is currently debating the Online Safety Bill at Committee Stage. This landmark piece of legislation introduces a new set of internet laws to protect children and adults from online harms.

The Bill will establish a regulatory framework for certain online services. These include user-to-user services, such as Instagram, Twitter and Facebook, and search services, such as Google.

The UK government’s stated aim in introducing the Bill is “to make Britain the best place in the world to set up and run a digital business, while simultaneously ensuring that Britain is the safest place in the world to be online”.

The Bill will place duties of care on both regulated user-to-user service providers and regulated search service providers. These regulated service providers will have duties relating to, among other things: (a) illegal content; (b) protecting children; (c) user empowerment; (d) content of democratic importance, news publisher content and journalistic content; (e) freedom of expression and privacy; and (f) fraudulent advertising.

The Bill also does two other distinct but interconnected things. It introduces age-verification requirements for providers of pornographic content whose services are not user-to-user, as well as new criminal offences, e.g., encouraging self-harm and epilepsy trolling.

This makes it a long, wide-ranging and complex Bill.

Moreover, the Bill will place more responsibility on technology giants to keep their users safe. It will give Ofcom, the UK’s communications regulator, the power to levy fines against non-compliant providers, and will make senior managers liable to imprisonment for failing to comply with a direction to provide Ofcom with information.

But what impact is the Bill expected to have? What concerns are there about the implementation of this new regime?

Prof. Lorna Woods (Professor of Internet Law, University of Essex), who devised the systems-based approach to online regulation that has been adopted by the Government and whose work is widely regarded as laying the groundwork for the UK’s Online Safety Bill, was recently interviewed on this new regulatory approach.

Photo by Austin Distel via Unsplash

On 11 May 2023, Prof. Woods stepped inside BBC Radio 4’s Briefing Room to be interviewed by David Aaronovitch. She talked about what is actually in the Bill, how the new internet laws are intended to work and what potential weaknesses still remain. The programme can be accessed here.

Prof. Woods also joined Conan D’Arcy of the Global Counsel tech policy team to talk about UK tech regulation, discuss recent criticisms of the Online Safety Bill, and consider the regulation of generative AI tools like ChatGPT. You can listen to the podcast here (published on 17 May 2023).

New Standards Code launched by press regulator IMPRESS

Photo by Jon Tyson on Unsplash

By Alexandros Antoniou, Essex Law School

On 16 February 2023, the press regulator IMPRESS launched its new Standards Code, with key changes including guidance on AI and emerging technologies, stricter measures on tackling misinformation, stronger safeguarding guidelines, and a lower discrimination threshold.

Background

IMPRESS is the only British press regulator to have sought formal approval from the Press Recognition Panel (PRP). The Panel was established in the aftermath of the phone-hacking scandal to ensure that any future press regulator meets certain standards in compliance with the Leveson report recommendations. IMPRESS is distinct from the Independent Press Standards Organisation (IPSO), Britain’s other press regulator which enforces the Editors’ Code of Practice but does not comply with the majority of the Leveson report’s independence requirements. IPSO regulates some of the more established UK press (e.g., the Mail newspapers, the News UK titles and their respective websites), whereas publishers regulated by IMPRESS tend to be newer and more digitally focused (e.g., Bellingcat, Gal-dem and The Canary). IMPRESS is viewed by some media campaigners (e.g., Hacked Off) as “the most popular” complaints-handling body in the country. Its membership has risen from just 26 publishers in 2017 to 113 today.

The IMPRESS Code was first published in 2017 with the aim of guiding media professionals and protecting the public from unethical news-gathering activity. It applies to all forms of news delivery, including print publications, news websites and social media, and to any individual or organisation gathering information and publishing news-related content. As the media landscape has rapidly evolved in the last few years, changes were introduced in February 2023 to help build trust and improve accountability in the industry, while covering a more diverse range of digital news creators (including publishers, editors, journalists, citizen journalists, reporters, bloggers, photojournalists, freelancers, and content creators) and their practices.

Some key changes

A major change concerned the issue of inaccurate content and was propelled by the challenges faced in distinguishing true information from misinformation and disinformation, including that generated by AI. To help journalists and publishers ensure that their material is supported by verifiable and legitimate sources, the Code and its associated Guidance on Clause 1 (Accuracy) and Clause 10 (Transparency) provide advice on fact checking and source verification, particularly within an online context. Specifically, the Code now requires publishers to exercise human editorial oversight to ensure the accuracy of any AI-generated content, clearly label such content, and take reasonable steps to limit the potential spread of false information (whether deliberate or accidental) by verifying the story with other sources and checking the information against other reliable sources.

Changes were also introduced in relation to the coverage of news stories involving children. These changes acknowledge children’s media literacy and autonomy, as well as the protections necessary for their development. The revised Code defines a child as anyone under the age of 18 and places an obligation on publishers to “reasonably consider” requests from children to remain anonymous during news-gathering and publication (Clause 3.3), as well as requests from those who were under 18 when an article was published to anonymise that news content in the present day (Clause 3.4). This is a welcome recognition of the proposition that individuals should not be adversely affected later in life because stories that concerned them as children remain widely available online. Importantly, under the new Code, an appropriate adult cannot veto a child’s refusal or revocation of consent (paragraph 3.1.2 of the Guidance to the Code).

Because of the internet and social media, publishers must also take extra care not to identify children indirectly through “jig-saw identification”, i.e., the ability to work out someone’s identity by piecing together different bits of information supplied by several features of the story or across articles or news outlets (the same can apply to adults, e.g., in cases where victims of sexual offences enjoy anonymity by law). The Code (Clause 3.2) requires publishers to consider using techniques or practices that remove identifying data (e.g., the area of a city where they live, their parents’ occupations or other unusual details that could lead to a child’s identification). This practice also helps publishers comply with minimum use requirements under data protection law.

Another significant change concerns the provisions on discrimination under Clause 4. The previous version of the Code stated that publishers would be found in breach if they incited hatred “against any group … [on any] characteristic that makes that group vulnerable to discrimination”. This reflected the legal standard under UK law, but it was not adequately enforced, particularly online. The revised Code holds publishers to stricter standards. Clause 4.3 reads: “Publishers must not encourage hatred or abuse against any group” based on those characteristics (emphasis added). The new wording lowers the threshold for what IMPRESS regards as discriminatory coverage and takes into account its potential effect not just on the communities concerned, but on society as a whole. This change, according to IMPRESS’ Deputy Chief Executive Lexie Kirkconnell-Kawana: “accounts for prejudice that could be more insidious and be more cumulative or more thematic, and not a direct call to action or violence against a group of people – because that’s an incredibly high threshold, and it’s not often how news is carried. You don’t see headlines saying […] ‘Take up arms against x group’.”

Clause 7 on privacy highlights that, when determining the privacy status of the information, publishers must give “due consideration to online privacy settings” (Clause 7.2(b)). Public interest justifications may, however, apply. The provision challenges the widely held misconception that information found or posted online is automatically made public or free to use. The Guidance to the Code acknowledges that an individual’s expectation of privacy may be weaker where no privacy settings are in place but clarifies that the absence of privacy settings will not necessarily prevent a breach of this Clause. It does not automatically mean that an individual consents to publishers or journalists publishing their content, which may reach an entirely different – or even wider – audience than the audience usually viewing the content on that individual’s account (paragraphs 7.1.4 and 7.2.6 of the Guidance to the Code).

Editorial responsibility and accountability with an outlook to the future

The new Code is the outcome of an intensive two-year review process, which involved consultation with academics, journalists, members of the public and industry stakeholders. Richard Ayre, Chair of IMPRESS, stated: “With more news, more sources, more publishers, more opinions than ever before, the opportunities for journalism are limitless. But nothing’s easier for a journalist to lose than public trust. This new Code sets the highest ethical standards for IMPRESS publishers, large and small, and whatever their point of view, so the public can confidently engage with the news of today, and tomorrow.”


This article was first published on the IRIS Merlin legal database. The original piece can be viewed here.

Ofcom Reports on Its First Year of Video-Sharing Platform Regulation

By Dr. Alexandros Antoniou, Lecturer in Media Law, University of Essex

Ofcom, the UK’s communications regulator, has published its first report on video-sharing platforms (VSPs) since becoming the statutory regulator for VSPs established in the UK. The report is the first of its kind under the VSP regime and reveals information not previously published by the in-scope regulated companies.

Platforms’ compliance with the new VSP regime

Ofcom’s report outlines the regulator’s key outcomes from the first year of regulation (October 2021 to October 2022). Its findings stem from the use of the regulator’s statutory powers under section 368Z10(3) of the Communications Act 2003 to issue enforceable information requests to all notified VSPs.

Specifically, some platforms made positive changes to their systems and processes in light of new VSP requirements, e.g., TikTok’s dedicated online safety committee to provide oversight of content and safety compliance, Snapchat’s parental control feature, and OnlyFans’ age assurance tools for all new UK subscribers. However, Ofcom found that platforms provided limited evidence of how well their user safety measures operate, making it difficult to assess their effectiveness and consistency. It also emerged that some platforms are not adequately resourced, equipped and prepared for regulation. There is a clear need for some of them to improve the quality of their responses to the regulator’s information requests. Moreover, Ofcom found that risk assessment processes were not prioritised by platforms, despite their importance in proactively identifying and mitigating safety risks. Risk assessments, however, will be a requirement on all regulated services under future online safety laws that will eventually supersede the VSP regime. Finally, some adult VSPs’ access control measures were not found to be sufficiently robust in preventing children from accessing pornographic content.

Moving towards the second year of the implementation of the regime, Ofcom will dedicate most of its attention to the comprehensiveness of user policies (also known as Community Guidelines), including their application and enforcement; the availability of appropriate tools empowering users to tailor their online experience; and the implementation of suitable age verification (AV) mechanisms to protect children from harmful online content, including pornography.

To increase transparency of platform processes and raise awareness of how VSPs protect against harmful content, Ofcom’s report also sets out the measures adopted by some platforms to protect their users. The following platforms were reviewed in particular: TikTok, Snapchat, Twitch, Vimeo, BitChute, and some smaller VSPs including Fruitlab, ReCast Sport and Thomas Cook, as well as smaller adult VSPs like AdmireMe, FanzWorld and Xpanded. The report explains the governance processes within each regulated service (giving detail on their systems for online safety risk management) and the journey followed by users/subscribers on each of these platforms.

Additional sets of research

Ofcom also made available a report on the VSP Landscape in the UK, describing the context in which providers apply protection measures. The report offers insights into: (a) who the notified VSP providers are; (b) how many users of VSPs there are in the UK and their demographics; (c) what the main business models used by VSP providers are; and (d) what information VSP providers make publicly available in their transparency reports.

With the aim of building its evidence base around the appropriateness of certain protection measures, Ofcom commissioned further sets of research to understand people’s experiences of using (and attitudes towards) safety measures on VSPs. The research explored a range of users’ perspectives, from parents (or carers) of children aged 6-17 to users of porn platforms.

More specifically, the VSP Parental Guidance Research looked at parents’ attitudes towards children’s online behaviours. In summary, it found that parents tended to perceive VSPs generally as having a constant and unregulated stream of content. Based on their current understanding and the information available to them, six in ten parents said they did not use parental controls on the VSPs that their child uses, because their child “did not need them”. Just over half of parents remembered seeing or receiving guidance on how to keep their child safe online from multiple sources (government websites being the most trusted). However, the study revealed that the process of finding information on online safety was described by many parents as overwhelming and often only prompted by a specific incident (e.g., school guidance, discovering their child was looking at inappropriate content). Parents were also appreciative of safety guidance from VSPs that was clear, digestible, accessible, and easy to understand.

An additional set of research, i.e., Adult Users’ Attitudes to Age-Verification (AV) on Adult Sites, found that, although there was broad support from adult participants for age assurance measures to prevent under-18s from accessing online pornography, UK adult sites were not doing enough to protect children. The biggest adult video-sharing site, OnlyFans, introduced new age verification in response to regulation (using third-party tools) but smaller sites based in the UK did not have sufficiently robust access control measures. Subscriber sign-on processes show that smaller UK-established adult VSPs have AV measures in place when users sign up to post content, but users can generally access adult content simply by self-declaring that they are over 18. Ofcom’s research showed that 81% of participants accepted AV measures where these were expected in general (e.g., whilst purchasing alcohol online or participating in online gambling). A similar proportion (80%) felt Internet users should be required to verify their age when accessing pornography online, especially on dedicated adult sites. The use of a credit card was the preferred means of AV for paid access to pornography. Serious concerns were expressed by participants about how user data might be processed and stored during AV processes to access pornography, reflecting a very low level of trust in the data privacy practices of adult sites.

These findings will inform Ofcom’s regulation of VSPs, including the rules on the protection of children, and its engagement with notified providers.


This article was first published on the IRIS Merlin legal database. The original pieces can be viewed here.

DCMS Report on Influencer Culture: Regulatory Gaps and Government Response to Calls for Reforms

Photo by Karsten Winegeart on Unsplash

By Dr. Alexandros Antoniou, Lecturer in Media Law, University of Essex

On 9 May 2022, the House of Commons Digital, Culture, Media and Sport Committee (which is responsible for scrutinising the work of the Department for Digital, Culture, Media and Sport and its associated public bodies, including the BBC) published its report on influencer culture, following the conclusion of its inquiry into influencers’ power on social media. Whilst acknowledging the benefits and the significant returns that influencer culture brings to the UK economy, the Committee emphasised that the industry needs to be given more serious consideration by the government. In the words of the DCMS Committee Chair Julian Knight MP, “as is so often the case where social media is involved, if you dig below the shiny surface of what you see on screen you will discover an altogether murkier world where both the influencers and their followers are at risk of exploitation and harm online”.

Devising a formal definition of the term ‘influencer’ is challenging, yet necessary for the effective enforcement of rules and regulations. For the purposes of its report, the DCMS Committee defined an influencer as “an individual content creator who builds trusting relationships with audiences and creates both commercial and non-commercial social media content across topics and genres” (para. no: 3). Influencer culture was taken to mean ‘the social phenomenon of individual internet users developing an online community over which they exert commercial and non-commercial influence’ (para. no: 1).

On the whole, the Committee found low rates of compliance with advertising regulation and concluded that employment protection has failed to keep up with the growth of online influencer culture, leaving those working in the industry unsupported and child influencers at risk of exploitation.

Four broad key issues pertaining to influencer culture emerged from the Committee’s inquiry, in particular.

Behind the camera

Despite the industry’s popularity, earning a living from social media influencing appears challenging. The report takes a look behind the scenes, going beyond the superficial glamour and the public perception of paid-for holidays and free gifts. It highlights that influencers face a range of challenges including hacking, impersonation, algorithmic unpredictability, mental health issues, online abuse, trolling and harassment. These problems appeared to be bigger for women than for men, exacerbated by the “lack of developed support from the surrounding ecosystem of platforms, regulators, talent agencies and brands” (para. no: 15).

Transparency around pay standards and practice

Although social media influencing is a rapidly expanding subsection of the UK’s creative industry, making a living in it remains difficult. A few influencers appear to take the lion’s share of well-paid work, while many others struggle to get by. As in other professions in the creative sector, many influencers are classified as self-employed, which may mean that they experience uneven revenue streams and lack employment protections (e.g., maternity or sick leave).

Moreover, the Committee points out the lack of payment transparency, which has resulted in pay gaps between different demographic groups, particularly affecting influencers from ethnic minority groups. Although social media platforms understand the value that influencers bring to their business model, they do not always “appropriately and consistently” (para. no: 58) compensate influencers for the work that goes into producing content that attracts users.

The state of influencer compliance and gaps in advertising regulation

The scale of the sector and the volume of content generated across multiple platforms have outpaced the capabilities of UK advertising regulation. According to the UK’s Competition and Markets Authority, influencer compliance rates with UK advertising regulations remain “unacceptably low” (para. no: 74). Earlier, in March 2021, the UK’s Advertising Standards Authority had reached similar conclusions in its research on influencer ad disclosure. The advertising watchdog’s report revealed a “disappointing overall rate of compliance” with its rules requiring ads on social media to be clearly signposted as such (see IRIS 2021-5/7 for more).

Despite platform-specific guidance on ad labelling and training for influencers, brands and agencies, the messaging around the rules on advertising transparency still lacks clarity and disclosure requirements are applied with a high degree of variation. New entrants to the influencer marketplace, who may not receive adequate support, are often unaware of their obligations under the advertising rules.

Children as viewers and children as influencers

Influencer content on social media is becoming increasingly popular with children, but the close bond children develop with online figures leaves them at risk of exploitation. Evidence suggests that children are more vulnerable to native advertising because they find it difficult to identify and distinguish from other content. Current advertising regulation does not appropriately consider their developing digital literacy, nor does it sufficiently address the need for enhanced advertising disclosure standards that meet children’s needs.

Furthermore, influencers may be financially incentivised to share “extreme content” (para. no: 104) that includes misinformation and disinformation which may affect children and other vulnerable groups susceptible to harms arising from this type of content. Influencer promotion of unattainable lifestyles and unrealistic beauty ideals was flagged as a particular issue, especially because its consistent message (i.e., ‘what you look like matters’) and the damaging pressure it generates are likely to contribute to mental health issues such as depression, anxiety, body dysmorphia and eating disorders. Currently, there is not enough regulation to protect children from this.

Concerns are expressed over the lack of protection for children participating in this new industry as successful influencers themselves (e.g., through gaming channels) and the impact this may have on their consent and privacy. Child influencers do not enjoy the same standard of protection around pay and conditions of work as traditional child performers in the entertainment industry. This is because child performance regulations do not currently apply to user-generated content.

Committee recommendations

In response to the issues identified earlier, the Committee makes a range of recommendations that call on the government to strengthen both employment law and advertising regulation. Specifically, the Committee recommends that the government: (a) conducts an industry review into the influencer ecosystem to address knowledge gaps; (b) develops a code of conduct for the industry as an example of best practice for deals between influencers and brands or talent agencies; (c) gives the ASA statutory powers to enforce advertising standards under its Code of Non-broadcast Advertising and Direct & Promotional Marketing; (d) updates the same Code to enhance the disclosure requirements for ads targeted to audiences composed predominantly of children; and (e) addresses gaps in UK labour legislation that leave child influencers vulnerable to exploitation (including working conditions and protections for earnings).

Image via Shutterstock

The government response: no indication of a change in mood

On 23 September 2022, the House of Commons Digital, Culture, Media and Sport (DCMS) Committee, which is responsible for scrutinising the work of the Department for Digital, Culture, Media and Sport and its associated public bodies (including the BBC), published the government response to its report Influencer Culture: Lights, camera, inaction? (previously reported on IRIS 2022-7/18).

The Committee had found low rates of compliance with advertising regulation and concluded that employment protection had failed to keep up with the growth of online influencer culture, leaving those working in the industry unsupported and child influencers at risk of exploitation. It made a range of recommendations that called on the government to strengthen both employment law and advertising regulation.

The Advertising Standards Authority (ASA), which monitors advertisements across the UK (including influencer marketing) for compliance with advertising rules, and the Competition and Markets Authority (CMA), which enforces competition and consumer laws and has powers to conduct investigations into suspected violations of these laws in the market, submitted separate responses to the Committee’s recommendations earlier in July 2022.

Recommendations concerning the ASA and the CMA

The government welcomed the Committee’s recommendations on strengthening the ASA’s regulatory tools (e.g., to be given statutory powers to enforce its rules) but pointed to the work currently undertaken as part of its Online Advertising Programme, which aims to improve transparency and accountability across the online advertising supply chain. The government also agreed that the CMA should have more powers to enforce consumer protection law and stated that it would bring forward its Digital Markets, Competition and Consumers Bill (announced in the 2022 Queen’s Speech) to provide for regulatory changes (including giving the CMA the ability to decide for itself when consumer law has been broken and to impose monetary penalties when breaches are established).

Influencer careers and influencer harassment

The government agreed with the Committee that pursuing a career as an influencer often came with challenges, including a worrying rise in the amount of online abuse, harassment and intimidation directed towards influencers. Reference was made to the Online Safety Bill (OSB), which will require technology companies to improve their users’ safety and take action against online abuse and threats on their services. The Bill places, in particular, a statutory duty on in-scope services to operate complaints procedures that provide for “appropriate” action to be taken by the provider in response to relevant complaints (clauses 18(2b) and 28(2b)). Services will thus be expected to consider the nuances of different types of harm and the appropriateness of their action in response to the complaints they receive. However, the progress of the Bill towards becoming law has been (at the time of writing) paused, with some of its most controversial elements being subject to government review.

Influencer code of conduct

In its response, the government expressed strong support for the Incorporated Society of British Advertisers’ (ISBA) Influencer Code of Conduct, noting that the ASA had already published guidance for influencers, which existed alongside the Code of Conduct for the Influencer Marketing Trade Body. The government agreed with the Committee’s proposal to develop a code of conduct which would complement ISBA’s existing work by promoting good practice in the coordination between influencers, brands and talent agencies. It is unclear, though, how the different codes of conduct and guidelines will work together effectively.

Media literacy and child influencers

Children are often unable to differentiate undisclosed advertising from other types of content they access on social media. The Committee had found in its report that both children and parents were not being adequately supported in developing media literacy skills to make informed choices online. Although the government appreciated the risk of children being exploited as consumers of influencer content, it referred to its ongoing work on the Online Media Literacy Strategy, which is designed to equip users with the knowledge and skills required to become more discerning consumers of information. The OSB is also intended to strengthen the media literacy functions of Ofcom (the UK’s communications regulator) by bringing media literacy within the new transparency reporting and information-gathering powers.

The government also recognised the regulatory gap in relation to safeguarding children acting as “brand ambassadors” themselves. Under existing law (i.e., section 37 of the Children and Young Persons Act 1963), a licence must be obtained before a child can legally participate in certain types of performance and activities in Great Britain (including for example any live broadcast performance or any performance recorded to be used in a broadcast or a film intended for public exhibition). However, this protection does not extend to user-generated content, e.g., where young people or a family record themselves and share it on social media. The government pointed out that the Department for Education was open to exploring legislative options that might provide more effective protection to children, but there was no express commitment to this.

Overall, the government welcomed the Committee’s comprehensive inquiry into influencer culture and recognised that it shed much-needed light on the influencer ecosystem and its impact on both traditional and digital media. However, the government’s response provides little indication of what concrete frontline actions will be taken.


This post replicates articles published earlier on the IRIS Merlin legal database. The original pieces can be viewed in IRIS 2022-7:1/18 and IRIS 2022-10:1/17.

The New Harmful Communications Offence and the Online Safety Bill

Photo by Gilles Lambert

By Prof. Lorna Woods and Dr. Alexandros Antoniou, University of Essex, School of Law

There has been much discussion about the threshold at which the new offence in cl 151 of the Online Safety Bill (OSB) might bite. We demonstrate here that the threshold is, as it is intended to be, very high, a long way above mere hurt feelings. Indeed, this new offence would tighten up considerably the regime currently in force – to strike it out would maintain a lower threshold.

The Online Safety Bill, in addition to the regulatory regime, introduces a number of criminal offences, including two communications offences which are a reformulation of the existing s 127 Communications Act 2003 offences. They are not novel but rather seek to ensure that the criminal law is better fitted to the current online environment, and are focussed on the harm caused by these communications.

There are three communications offences, in addition to the cyber-flashing offence (cl 157):

  • Harmful communications offence (cl 151)
  • False communications offence (cl 152)
  • Threatening communications offence (cl 153).

This blog focuses on the first of these – the harmful communications offence.

What does it do?

It is a general harm-based communications offence to replace the current offences under s 1 of the Malicious Communications Act 1988 and s 127(1) of the Communications Act 2003. It shifts focus from the content of a communication to its potentially harmful effects.

For a person to be prosecuted, there is a three-fold test to apply:

  1. there must have been a “real and substantial risk” that the message “would cause harm to a likely audience”;
  2. the person sending the message intended that harm; and
  3. the defendant had no reasonable excuse for sending the message.

These elements must all be proven by the prosecution. The Government has tabled an amendment (NC13) which would exempt a ‘recognised news publisher’ (as defined in cl 50) from the offence in cl 151. At the time of writing, the amendment has not yet been debated.

How does this affect the threshold for criminal liability?

In its proposals to the Government, the Law Commission was clear that the new offences would set a higher threshold for criminal liability than the current rules do (paras 1.35 and 2.82), though they may catch some material that would not previously have been caught but arguably should have been (the ‘technically legal’; see in particular paras 1.5 and 1.6).

The Law Commission justifies raising the threshold not because it would necessarily be illegitimate to criminalise the content, but because it was unnecessary where there is a regulatory regime that deals with ‘harmful but legal’ content (para 2.9). There seems then to be a link between the higher criminal threshold and the existence of the legal but harmful provisions in the Online Safety Bill.

Looking at the threshold, cl 151(4) defines ‘harm’ as “at least serious distress”. According to the Law Commission, the use of the word “serious” was to indicate this raising of the threshold for the criminal offence. In its view, “serious” does not simply mean “more than trivial”. It means a “big, sizeable harm”.

The Law Commission notes that the term “serious distress” already features in the criminal law which allows “the courts to use existing experience and understanding” (para 2.52) as a starting point (the Law Commission expressly noted that this offence should not be bound to the harassment case law, para 2.81). It seems the threshold will be less than that of a ‘recognised medical condition’; nor need it have a substantial adverse effect on a person’s usual day-to-day activities. The Law Commission has also suggested that (once the offence is enacted) non-statutory guidance be given providing a non-exhaustive list of factors to be taken into account (para 2.83).

The Law Commission also takes the view that, because the offence requires a risk of harm, it is limited to situations where harm is foreseeable by the defendant (as opposed to any possibility of actual harm, no matter how unlikely). This means that there must be more than a mere risk or possibility of harm. The requirement that there be a likely audience means that the risk of harm can be assessed in relation to the particular characteristics of that audience.

The other two elements noted above also operate to limit the scope of the offence. The DCMS has produced a factsheet on the new offence, providing clarification of how the harmful communications offence is intended to work. The intent to harm – or rather the lack of it – can be illustrated by a Zoom call in which a doctor breaks upsetting medical news: the doctor is not intending to cause distress but to inform the patient of the facts.

The factsheet also suggests that political satirical cartoons would be unlikely to be caught by the offence: there is no evidence that the individual intended to cause at least serious distress; moreover, given the importance of political speech, it is likely that the cartoonist would be seen as having a reasonable excuse for sending the message. A similar point could be made about images from warzones.

It also gives the example of a tweet sent to the followers of the person tweeting, which says “I want to make my position on this issue clear, I do not believe that trans individuals are real women.” According to the factsheet, the person tweeting was contributing to a political debate, albeit a controversial one. This means that the person sending the communication has a reasonable excuse for sending it.

Advertising Watchdog Publishes Report on Tackling Harmful Racial and Ethnic Stereotyping in Ads

Photo by Yasin Yusuf

By Dr. Alexandros Antoniou, Lecturer in Media Law, University of Essex

On 3 February 2022, the UK’s regulator of advertising across all media, the Advertising Standards Authority (ASA), published its research into harmful racial and ethnic stereotyping in UK advertising. The survey highlighted a number of important issues that participating consumers raised about the depiction of people from different racial and ethnic backgrounds.

Ads that are likely to cause serious or widespread offence and/or harm owing to particular portrayals of race and ethnicity have long been regulated under the UK Code of Non-broadcast Advertising (CAP Code) and the Code of Broadcast Advertising (BCAP Code). Rule 4.1 of the CAP Code states that ‘Marketing communications must not contain anything that is likely to cause serious or widespread offence. Particular care must be taken to avoid causing offence on the grounds of age; disability; gender; gender reassignment; marriage and civil partnership; pregnancy and maternity; race; religion or belief; sex; and sexual orientation’. Equivalent provisions are found in Rule 4.2 of the BCAP Code. Marketers are urged to consider public sensitivities before using potentially offensive material and compliance is typically assessed with reference to several factors, including the context, medium, audience, type of product and generally accepted standards.

Advertising can play a role in legitimising stereotypes. Certain types of racial and ethnic stereotypes can, in particular, cause harm by creating a set of limiting beliefs about a person that might negatively affect how they perceive themselves, and how others see them. In the aftermath of the death of George Floyd (whose murder by a police officer in the US city of Minneapolis in 2020 sparked a global movement for racial justice and led to pressure for change across the world), the ASA has been reflecting on what further efforts could be made to address factors that contribute to Black, Asian and other minority racial or ethnic groups experiencing disproportionately adverse outcomes in different aspects of their lives.

As a first step, the regulator commissioned public opinion research in order to establish whether stereotypes associated with race and ethnicity can, when featured in ads, give rise to widespread or serious offence and/or contribute to real-world harm, such as unequal outcomes for different racial and ethnic groups. The research, which was conducted between March and June 2021, comprised two stages: a qualitative study that covered different interest groups, and a quantitative study that was designed to identify the extent to which attitudes and beliefs were held across individual communities and the UK as a whole. The research indicated that: ‘over half of Black, Asian and Minority Ethnic respondents felt that, when they were represented in ads, they are not accurately portrayed, and of those, just over a half felt people from their ethnic group are negatively stereotyped’.

Five categories of racial and ethnic stereotypes were identified by the research (some of which are interrelated):

  1. Roles and characteristics: overt or subtle stereotypical portrayals pertaining to appearance, behaviour, employment status, mannerisms, accent and preferences. Such portrayals may contribute to the homogenisation of vastly diverse groups and can be seen to reinforce or promote outdated views of a particular race or ethnic group.
  2. Culture: the exaggeration and mocking of accents, ‘lazy’ references to culture, cultural appropriation, and the use of imagery suggestive of colonialism.
  3. Religious beliefs and practices: repeated depictions of Muslim or Asian women wearing the hijab were seen by participants as ‘an easy stereotype that lacked authenticity’. There was, however, support for portrayals that did not draw specific attention to a person’s racial or ethnic background.
  4. Objectification and sexualisation: concerns were expressed about depictions of sexualised and/or objectified Black men and women as well as depictions that ‘fetishised and exoticised’ Asian women. However, positive portrayals of the diversity of body shapes and sizes were generally welcomed.
  5. Use of humour at the expense of other ethnic groups: making fun of a group or their appearance, culture or tastes, e.g., the use of different accents can be seen as mocking or ‘othering’ by reinforcing the idea that people from racial or ethnic minorities who speak with an accent are different from White or Western people.

Moreover, the research highlighted three potential types of harm that could develop from adverse portrayals of race and ethnicity:

  1. reinforcement of existing stereotypes through the repeated use of certain portrayals (often described as ‘always showing us the same way’, e.g., the casting of Asian men as shop keepers, waiters and taxi drivers or subtle reinforcements of a servile role). The perceived harm in relation to this was seen in making it easier for others to see people from racial or ethnic minorities as different to the mainstream (‘othering’);
  2. the emergence of new tropes which continue creating a one-dimensional picture of Black, Asian and other minority racial or ethnic groups; and
  3. perpetuating or implicitly reinforcing racist attitudes by depicting racist behaviour: such depictions were felt to pose a risk of evoking past trauma and reinforcing prejudice (even where it was understood that the advertiser’s intention was to challenge negative stereotypes within the messaging of the ad).

The research did not give the ASA reason to believe that its interpretation and application of the Codes’ rules were generally out of step with consumers’ and stakeholders’ opinions. The findings can, however, provide greater clarity and valuable insights into the types of ads that pose a risk of causing harm and/or offence. At the end of 2022, the regulator will conduct a review of its rulings in this area to identify newly emerging areas of concern and ensure that it is ‘drawing the line in the right place’.

At this stage, it is not anticipated that a new targeted rule will be introduced into the Advertising Codes to ban the kinds of portrayals identified in the report. Nevertheless, the Committee of Advertising Practice (CAP) and the Broadcast Committee of Advertising Practice (BCAP), which are responsible for writing and updating the UK Advertising Codes, will consider whether specific guidance on racial and ethnic stereotypes is necessary to encourage creative treatments that challenge or reject problematic stereotypes and diminish issues arising from the repeated presentation of a specific race or ethnicity in a particular way. Finally, the research findings will be presented to industry stakeholders and training will be offered to support advertisers where necessary.


This article was first published on the IRIS Merlin legal database and is reproduced on the ELR Blog with permission and thanks. The original piece can be viewed here.

Libel Trial against Investigative Journalist Concludes Before the High Court: A Landmark Test of the Public Interest Defence

Carole Cadwalladr speaks at TED2019: Bigger Than Us (April 15 – 19, 2019, Vancouver, BC, Canada) Photo: Marla Aufmuth via Flickr

By Alexandros Antoniou, Lecturer in Media Law, University of Essex

On 14 January 2022, a high-profile libel trial began before Mrs Justice Steyn at the Royal Courts of Justice in London. The British businessman Arron Banks sued investigative journalist Carole Cadwalladr for libel. Mr. Banks is an outspoken backer of Brexit. Ms Cadwalladr is an award-winning journalist who writes for the Guardian and Observer in the United Kingdom. She is particularly known for her work in uncovering the Cambridge Analytica scandal.

The case arose out of remarks made by Ms Cadwalladr in a TED talk titled ‘Facebook’s role in Brexit – and the threat to democracy’, delivered in April 2019, and a related tweet. In the course of the talk, which centred on the UK’s 2016 vote to leave the European Union, she said: “And I am not even going to go into the lies that Arron Banks has told about his covert relationship with the Russian Government”.

Arron Banks has always strongly denied any illegal Russian links, but he has admitted meeting Russian embassy officials on a number of occasions. Although his Leave.EU campaign was fined GBP 70,000 over multiple breaches of electoral law, the National Crime Agency’s investigation found no evidence of criminal activity.

Proceedings were issued on 12 July 2019. In a preliminary ruling on the meaning of Ms Cadwalladr’s words, Mr. Justice Saini held on 12 December 2019 that an average ordinary listener would have understood her words to mean: “On more than one occasion Mr. Banks told untruths about a secret relationship he had with the Russian Government in relation to acceptance of foreign funding of electoral campaigns in breach of the law on such funding.”

Mr. Banks maintained in his legal claim that the threshold of ‘serious harm’ under section 1 of the Defamation Act 2013 had been met in terms of damage to his reputation. Ms. Cadwalladr stated that this was not the meaning she had intended and that she had always taken care to say there was no evidence to suggest Banks had accepted any money. She originally pleaded the defence of ‘Truth’ under section 2 of the 2013 Act but, after Mr. Justice Saini handed down his ruling on the meaning her statement bore, Ms. Cadwalladr withdrew this defence in November 2020. She is now relying on the defence of ‘Publication on a matter of public interest’ under section 4 of the Act.

The defence under section 4 reflects principles established by previous case law. It consists of two elements: Section 4(1)(a) requires that the words complained of were (or formed part of) a statement on a matter of public interest, and if the publication in question passes this test, then it also needs to meet the requirement of section 4(1)(b), which contains objective and subjective components.

The subjective component is that the defendant must believe the publication was in the public interest and the objective component is the question of whether it was reasonable for the defendant to hold that belief. Section 4(2) of the 2013 Act requires in particular that, in determining these matters, the court ‘must have regard to all the circumstances of the case’.

Thus, the central issue at this trial is likely to be whether it was reasonable for Ms. Cadwalladr to believe that the publication of her statements was in the public interest. The court will also look at the content and subject of the allegations, and the way the journalist acted in researching and reporting them. If Ms. Cadwalladr loses, she faces legal costs of up to GBP 1 million on top of damages.

In a piece published by Open Democracy, Ms. Cadwalladr stated: “Right now, we can’t police the money spent in our elections: this is a massive problem for our democracy. Facebook is unregulated and our electoral laws are still hopelessly unenforceable. There was (and still is) a huge public interest in journalists raising these issues – both as a warning for us here in Britain, and for countries everywhere”.

An interesting aspect of this case is that Arron Banks sued neither the Guardian Media Group which published Ms. Cadwalladr’s reporting for years nor TED which hosted her talk (or other large media outlets which made similar allegations). Instead, he chose to sue Cadwalladr personally. Press freedom groups have called for the case to be thrown out and described it as bearing many of the elements of a so-called SLAPP lawsuit – Strategic Litigation Against Public Participation. A key characteristic of such types of actions is the disparity of power between the claimant and the defendant.

The case has renewed calls for the UK Government to ensure that SLAPPs are not used to silence legitimate criticism and stifle public interest reporting. Action to combat the emergence and growth of abusive litigation targeting journalists throughout the EU and ensure convergence in Member States’ approaches to SLAPPs is currently being considered at the EU level.

The Banks v Cadwalladr trial was heard over five days and judgment was reserved. The case has been followed closely by several investigative reporters. Reporters Without Borders emphasised in particular that “the ruling will have serious implications for journalism not only in the UK, but internationally, given the popularity of London courts as a jurisdiction for such suits, and highlights the need for greater protections for journalists facing legal threats”.


This article was first published on the IRIS Merlin database of the European Audiovisual Observatory and is reproduced on the ELR Blog with permission and thanks.

Ofcom clears ITV for Piers Morgan’s controversial comments about Meghan Markle

Prince Harry and Meghan Markle going to church at Sandringham on Christmas Day 2017 | Source: Wikimedia Commons

Dr. Alexandros Antoniou, School of Law, University of Essex

On 1 September 2021, Ofcom, the UK’s communications regulator, rejected a record number of complaints about Piers Morgan’s comments on Good Morning Britain in the wake of the Duke and Duchess of Sussex’s interview with Oprah Winfrey.

Good Morning Britain (GMB) is a weekday morning news and discussion programme broadcast on ITV. On 8 March 2021, GMB was dominated by the interview between Oprah Winfrey and the Duke and Duchess of Sussex which had been broadcast overnight in the USA. Excerpts from the interview had been made publicly available ahead of its full broadcast in the UK that evening. The programme included a report on how the US was reacting to the interview and focused on two parts which revealed that the Duchess had contemplated suicide and that an unnamed member of the Royal Family had raised concerns about “how dark” her son’s skin colour might be.

The following day, the lead presenter Piers Morgan made it very clear during the show that he did not believe a word of what Meghan Markle had said, adding that if she read him a weather report, he wouldn’t believe it. Mr. Morgan stormed off the GMB set after clashing with weather presenter Alex Beresford over his controversial remarks. By the end of the day, the mental health charity Mind had released a statement expressing its deep concern over the statements aired on the show. This was rather awkward for ITV because of their 2021 Get Britain Talking mental wellness campaign, in which Mind is a partner. A strong public reaction ensued. Ofcom received more than 57,000 complaints about Mr. Morgan’s comments on GMB, making it the most complained about TV show in Ofcom’s history. The same evening, ITV announced that the GMB host had resigned from his role on the show after six (often confrontational) years.

The complaints received by the regulator can be grouped under two main categories. The first category related to concerns about Morgan’s statements on the Duchess of Sussex’s revelations about her mental health and suicidal feelings. The second category related to concerns about the presenter’s dispute of the Duchess’ personal account of her experiences of racism within the Royal Family during her time as a senior royal. The programme in question raised issues under Section Two of the regulator’s Broadcasting Code which outlines standards for broadcast content in respect of harm and offence.

In particular, the rules engaged were Rule 2.1 which provides that “generally accepted standards must be applied to the content of television and radio services […] so as to provide adequate protection for members of the public from the inclusion in such services of harmful and/or offensive material” and Rule 2.3 which requires that broadcasters must ensure that potentially offensive material is justified by the context. Under the latter, racist terms and material should be avoided unless their inclusion can be justified by the editorial content of the programme.

As far as the discussion of mental health and suicide in the programme is concerned, Ofcom held in a 97-page-long ruling that Piers Morgan was entitled to hold and express strong views that scrutinised the veracity, timing and possible motivations behind the allegations made by the Duke and Duchess of Sussex. Their interview was a major international news story that was a legitimate subject for debate in the public interest. Restricting such views would be “an unwarranted and chilling restriction” to the broadcasters’ right to freedom of expression and the audience’s right to receive information and ideas without undue interference (Article 10 of the ECHR). However, while the Broadcasting Code does not seek to curb broadcasters’ right to include contentious viewpoints, compliance with the Code’s rules must be ensured.

The regulator expressly acknowledged that Piers Morgan’s statements of disbelief of Meghan Markle’s suicidal thoughts had the potential to cause harm and offence to viewers. Without adequate protection by broadcasters, audience members (some of whom were likely to place weight on the presenter’s opinions) may have been discouraged from seeking mental health support for fear of facing a similar reaction. As the Chief Executive of Mind explained in the charity’s statement: “[…] when celebrities and high-profile individuals speak publicly about their own mental health problems, it can help inspire others to do the same. Sharing personal experiences of poor mental health can be overwhelming, so it’s important that when people do open up about their mental health they are met with understanding and support.”

Ofcom underlined their concerns about Mr. Morgan’s apparent disregard for the seriousness of anyone expressing suicidal thoughts, but nevertheless took the view that the robust and direct challenge to his comments from other programme contributors provided important context for viewers throughout the programme. “Overall, adequate protection for viewers was provided and the potentially harmful and highly offensive material was sufficiently contextualised,” Ofcom concluded. Thus, on balance, the programme was not found in breach of Rules 2.1 and 2.3 in respect of the discussion on mental health and suicide. Although the regulator ruled in Mr. Morgan’s favour, it reminded ITV to be more cautious when discussing sensitive issues around mental health, e.g., through the use of timely warnings or signposting of support services.

A similar reasoning was followed in relation to the second category of complaints about race. Ofcom considered that the conversations in the programme provided an open and frank debate on the nature and impact of racism, a topic of high public interest value. Given the seriousness of the allegations made in the interview with Oprah Winfrey, it was legitimate to discuss and scrutinise these claims. The programme also included several contributors who could speak “decisively and with authority” on racial issues, meaning that a range of views was represented, and Mr. Morgan’s comments were directly challenged on several occasions. Despite the strong opinions expressed in the programme, which could be highly offensive to some viewers, any potential offence was justified, in the regulator’s view, by the broader context; hence, the comments were not found to be in breach of Rule 2.3 of the Code.

Speaking at a Royal Television Society conference in September 2021, the Chief Executive of Ofcom Dame Melanie Dawes defended the regulator’s ruling as “quite a finely balanced decision” but “pretty critical” of Piers Morgan. However, BBC presenter Clive Myrie, who interviewed Dame Dawes at the event, told her: “The media forums that I’m on, which include a lot of black broadcasters and producers and people in the industry, were very upset at the Ofcom ruling concerning Piers Morgan, which was about his comments and views on mental health issues, but that race element is there. And their sense is that it [Ofcom] is too white an organisation and would never understand why that ruling was so upsetting to so many people.”

Piers Morgan was recently nominated for best TV presenter at the 2021 National Television Awards. On 15 September 2021, it was reported that he would be joining a Rupert Murdoch-owned network as a host of a new show that is planned to air in the US, UK and Australia.


This piece was first published on the IRIS Merlin legal database and is reproduced on our blog with permission and thanks. The original article can be accessed here.

Who Killed the Radio Star? How Music Blanket Licensing Distorts the Production of Creative Content in Radio

Photo by Eric Nopanen

According to popular and scholarly belief, video killed the radio star. The golden age of radio, which peaked in the 1930s and 1940s, faded with the rise of television in the 1950s and 1960s.

In their new article, titled ‘Who Killed the Radio Star? How Music Blanket Licensing Distorts the Production of Creative Content in Radio’ and published in the American University Law Review, Dr. Eden Sarid, Lecturer in Law at the University of Essex, and Prof. Ariel Katz, Associate Professor at the Faculty of Law, University of Toronto, advance the argument that television’s role in the “death” of the radio star has been more limited than commonly believed.

A major culprit, the authors argue, is the common practice of licensing musical content for broadcasting or, more precisely, the blanket license issued by copyright collective management organizations (CMOs). CMOs offer all-you-can-eat blanket licenses that allow broadcasters to use as many songs from the CMO’s repertoire as they like for a fixed fee.

Thus, by setting a zero marginal price for broadcasting additional songs from the CMO’s repertoire, blanket licensing drives commercial radio stations to dedicate a larger portion of their programming to recorded songs and to reduce the time and resources spent on producing or procuring other content.

The article then argues that the analysis of blanket licenses should not be limited to their static effects (i.e., the trade-off between lower transaction costs and supra-competitive pricing), but should also take account of the dynamic effect of blanket licensing on the type and quality of content production.

This dynamic effect also poses a challenge for copyright law and policy: while collective licensing may be beneficial to one class of copyright holders, it may hinder the production of other content and harm creators of such content, by depriving them of important opportunities for market and cultural participation.

Moreover, the article provides a novel explanation for the well-documented phenomenon of the “death” of the radio star and re-evaluates some of the existing explanations.

Finally, the authors discuss some alternative models for music licensing that can mitigate the distortion created by blanket licenses.

A copy of the article can be accessed on the University’s research repository.

Nevermind at 30: Why the Nirvana Baby Lawsuit is a Warning for Parents

Photo by Jurian Kersten

Alexandros Antoniou, Lecturer in Media Law, University of Essex

Nirvana’s album Nevermind has reached its 30th anniversary and is under more scrutiny than ever as a result of a lawsuit recently filed by its former cover star.

Spencer Elden, the underwater baby tempted by a dollar bill on a fishhook, is suing the band and Kurt Cobain’s estate for having “knowingly produced, possessed, and advertised commercial child pornography”. The claim states that the band benefited financially from their participation in his “sexual exploitation”. Elden now seeks a civil remedy of US$150,000 per defendant for the “lifelong damages” he claims to have suffered.

Originally inspired by Cobain’s fascination with water births, the cover has been said to work as a comment on the values society imparts to its youth. The same picture is, however, interpreted differently in the lawsuit, which attempts to weave in the idea that the image was designed to elicit a sexual response from viewers.

It goes so far as to suggest that Cobain “chose” the image depicting Elden “like a sex worker – grabbing for a dollar bill that is positioned dangling from a fishhook in front of his nude body”.

The legal argument

Under US federal law, a key factor in distinguishing between the artistic cover and illegal explicit content is whether the depiction of the minor constitutes a “lascivious exhibition” of their intimate parts – in other words, a depiction designed to excite sexual stimulation in the viewer. Also, any determination of lasciviousness must be based on the depiction taken as a whole, with its overall content and context in mind.

Elden is likely to face an uphill struggle in persuading a court that the cover is deliberately focused on the baby’s genitals and that the creators intended to elicit a sexual response – as the first thing most people probably notice is the underwater background.

But, even if he were successful on the child pornography ground, the difficult question would arise of whether fans who own or have downloaded the album with its cover art have copies of a child sex image and so have committed a crime.

The lawsuit also suggests that Elden has suffered a “loss of enjoyment of life” and had his privacy violated. But it could be pointed out that Elden has previously acted in ways that continue to cement his connection with the band. He has re-enacted the cover to honour the album’s past anniversaries and attended events to sign album covers.

Although it’s not unusual for people to reconsider the impact of their experiences from early life, the fact that Elden leaned into the public sphere and seemingly relished his involvement with the album may dilute the strength of his claims.

Couldn’t consent

Elden’s parents were reportedly paid US$250 for the photo shoot. Presumably, this was a standard rate for an unknown model rather than a fee reflecting what the image would come to be used for.

It is uncertain whether this money was passed down to Elden. He has expressed his bitterness about having never directly profited from his involvement in the Nevermind project. As his parents’ deal cannot now be renegotiated, some might dismiss his current lawsuit as an attempt to get compensation for the commercialisation of his image.

At the core of Elden’s lawsuit is the fact that the band’s team obtained his parents’ consent before photographing him. Being a baby, of course, Elden himself had no say in the matter. And from this perspective, Elden’s case is a useful reminder for parents to think about the types of images they share online.

A warning to ‘sharents’

A lot has changed since the release of Nevermind in September 1991. With the rise of social media sites and photo-sharing networks, the average parent today is said to share over 1,000 images of their child online before the child’s fifth birthday. Compared to the Nirvana baby album cover, images shared online nowadays are even harder to control.

Indeed, a recent study found that 42% of teenagers in 25 countries are troubled by what their parents post about them on social media.

Although some steps have been taken to protect children’s privacy online – such as the introduction of the Children’s Code, which applies to digital services likely to be accessed by children – the law is not clear as to whether a child’s right to privacy is essentially lost when parents share their images online.

The legal avenues currently available do not guarantee protection against parental “over-sharenting” either, meaning that so-called “generation tagged” may have to live with the longevity of their digital footprint – often attached to them without their consent.

Elden has previously addressed the popularity of the iconic cover and appears conflicted about it. His ambivalence about the image may well be justified. Even so, the public’s perception of the album and the visceral feelings attached to its success should not discourage a dispassionate and neutral legal assessment of whether the photograph is unlawful.

But the Nirvana baby lawsuit also serves as a timely reminder to parents to think carefully about the digital shadows they may create for their children. Indeed, parents cannot simply have a “nevermind” attitude to what they share online.


This article is republished from The Conversation under a Creative Commons license. Read the original article.