The Anatomy of Impact: A Conversation with Professor Lorna Woods

By Professor Carla Ferstman, Director of Impact, Essex Law School

As academics, we conduct research for all sorts of reasons. We seek to advance knowledge and innovation in the areas in which we specialise, and we try to make connections with research being done in other disciplines to enhance our understanding of cross-cutting, complex challenges and to contribute to addressing them.

Academic research is increasingly being applied outside academia to foster impact in our communities and societies. Research-led teaching can also create opportunities for cutting-edge student learning.

The UK Research Excellence Framework values world-leading research that is rigorous, significant and original. It also encourages and rewards research that generates impact, which it understands as “an effect on, change, or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia” (REF2021).

Impactful research is particularly relevant and important for the discipline of law, where colleagues’ work can lead to changes in how justice is perceived and how access to justice can be better achieved. Academic research in law has led to and influenced the direction of law reform, and academic findings have been applied authoritatively in court judgments. Legal research has also led to the development of new policies and regulatory frameworks in the UK and internationally.

Despite the importance many legal academics place on generating impact, the route to impact is not obvious. Achieving impactful academic research defies a one-size-fits-all formula, though certain key pointers are invaluable:

First, impactful research is generated by academics who produce excellent, groundbreaking research.

Second, academics should be mindful of who (e.g., community stakeholders, policy-makers, decision-makers) would benefit from knowing about the research and should develop a strategy to ensure they effectively disseminate their findings.

Third, academics seeking to generate impactful research should be actively engaging with those who can benefit from their research, adapting their approach based on stakeholder needs and circumstances.  

Learning from example

Academics can glean wisdom from exemplary models. And there is no better example than Professor Lorna Woods, whose research contributed significantly to the Online Safety Bill (now Online Safety Act 2023) and led to her being awarded an OBE for services to internet safety policy.

I sat down with Professor Woods to get a clearer understanding of her trajectory – how she got from A to B to C (or indeed, from B to A to F to C), to better appreciate the time her ideas took to percolate and the challenges she faced along the way.

I wanted to understand whether her research was picked up by government by happenstance, by careful, methodical planning, or by some combination of the two. I also wanted to know whether there was any magic formula she could share for generating impactful research.

Lorna qualified as a solicitor and worked in the early 1990s for a City firm in London, where she was exposed to a variety of areas of law, including international trade, competition, and commercial law. She began to work with two of the partners on matters involving regulation, intellectual property, and media. She happened to be at the firm when many developments in the law occurred, such as the Broadcasting Act 1990, updates to data protection rules, and other changes resulting from growing public access to the internet.

This quickly developed into a specialism related to technology. “The work was really interesting. It wasn’t just the typical due diligence or deals management work that one often received in a corporate solicitor’s firm, there was a space to think and a space to have your say”.

Also during this time, Lorna did some consulting work for the European Commission in Eastern European countries following the political changes of the early 1990s. The work focused on media freedom and public service broadcasting, and involved new thinking about the rights of the public audience, which had not yet been theorised.

Lorna left the firm after about five years when, as often happens, she began to take on a more supervisory role, with some of the most interesting pieces of work being delegated to more junior colleagues. She pursued an LL.M degree at the University of Edinburgh (legal theory and human rights, with a dissertation on federalism and the European Union) and began to apply for academic roles. She secured a position in 1994 at Sheffield and began teaching EU and public law.

The Eureka moment or more of a slow-burner?

Gradually Lorna’s research began to drift back to media law and data protection, incorporating areas she had been studying around human rights, public speech, surveillance, and the rights of journalists, but with her own take. She recalled that “A lot of people were talking about journalists’ rights, but I was focussed on the rights of the companies who were transmitting; an ‘essential facilities’ argument but approached from a rights perspective. I also started looking at these issues from the perspectives of EU law and the free movement of cultural standards [the rights of the audience] rather than simply as an issue of freedom of expression.”

Central to this was the idea that there were different actors in an information environment: the speakers, the audience, and something in the middle, the platform, which is not really seen or thought about. The question Lorna had was whether these entailed separate rights or were all part of a unified right to information.

In 2000, Lorna began collaborating with Professor Jackie Harrison at Sheffield on new media and media regulation, and it was here that she further developed her thinking on the rights of the audience: not only to have access to information, but to information that is reasonably reliable and, where possible, drawn from a diversity and plurality of sources.

This also connected to her thinking about how we find information on the internet, who curates what we can find, and what responsibilities may attach to that curation. The flip side was considering the nature of states’ positive obligations to provide a safe online environment. Lorna also began to explore issues around user-generated content.

In response to the growing awareness of how female politicians and activists were being targeted on Twitter (now X), and the notoriety of the abuse faced by Caroline Criado Perez and Walthamstow MP Stella Creasy, Lorna started looking at what controls were in place, and began to consider the gaps in regulation and how they could best be addressed.

At the time, she observed that politicians had embraced Twitter, amplifying their influence while also making themselves more accessible and exposed. The platform facilitated direct communications between everyone on the network, including unsavoury individuals who used it as a vehicle for abuse. This was fuelled by anonymous accounts, hashtags that allowed users to jump on the bandwagon, and seemingly little moderation at that stage. There were many instances of public-facing women receiving rape and death threats.

In consequence, there were several instances in which users were being charged in the UK under section 127 of the Communications Act – a low-grade offence which criminalises the sending, via a “public electronic communications network”, of a message which is “grossly offensive or of an indecent, obscene or menacing character”. But it was never clear to Lorna that using the criminal law was the best solution to the problem.

The campaign for law reform begins to take shape

Around 2015, Lorna became aware that the then Labour MP Anna Turley was developing a private member’s bill: the Malicious Communications (Social Media) Bill. Someone whom Lorna had met in an unrelated capacity – “this is just really a feature of when you work in a certain area, you meet people linked to that area. And progressively, your army of contacts comes back to help” – William Perrin, managed to get her in the door to meet the MP.

Together, Lorna and William helped to draft the Bill. The goal was to give users better tools (user empowerment features and functionalities) so that they could filter and triage incoming content, at least as a starting point for improving the online environment. Their advice (which was taken on board) was not to remove platform immunity for third-party content; they recognised that the platform providers were offering an important service worth protecting.

Part of the rationale for this was the connection they saw between internet platform providers and telecoms providers: “If you were to hold a telecoms provider responsible for anything communicated on the service, they would become very cautious and ultimately it would shut down the service. So, there was a need for caution.” Ultimately the Bill did not progress, as private members’ bills rarely do, but such bills serve to bring matters to the attention of the Government and can form part of a campaign for change.

Subsequently, the Government published a Green Paper on internet safety in 2017, which raised significant concerns. This was the era of Cambridge Analytica and misinformation, but there were also concerns about child pornography and online bullying and, stemming from the tragic Molly Russell case, about algorithms prioritising harmful content for vulnerable users. The Green Paper seemed to revisit the recommendation to remove (or significantly restrict) platform immunity for third-party content, which Lorna and William did not think was the best approach, for the reasons already stated.

There was a need to conceive of the problem at the systems level, rather than merely focusing on isolated items of content. The scale of the problem, for example, lay not in individual offensive posts but in the fact that content could quickly go viral without appropriate controls, aided by functions like the “like” button and the availability of anonymous, disposable accounts.

Similarly, the recommender algorithms that optimised posts for engagement tended to privilege the most irrational, emotional posts, which were more likely to promote hatred or cause offence. Making small changes to these kinds of features, and investing more in customer response, could significantly improve online safety. Thus, according to Lorna, there was a certain recklessness in the product design that needed to be addressed – this was the genesis of the idea of a statutory duty of care.

Paws for thought: remembering Faith, Lorna’s beloved cat who ‘Zoom-bombed’ video calls during lockdown and contributed much to debates on online safety

The statutory duty of care

Lorna and William produced a series of blogs and papers outlining this position, and the need for such reforms was also underscored by Lorna during an oral evidence session at the House of Lords inquiry into the regulation of the internet. The Carnegie UK Trust stepped up to champion Lorna and William’s work, facilitating its progress.

The UK Department for Culture, Media and Sport (DCMS) invited Lorna to give a briefing, and it became clear that there was some confusion: the DCMS had been under the impression that the conditionality of platform immunity already amounted to a statutory duty of care. Consequently, part of what Lorna and William tried to explain was how their proposal was compatible with the principle of platform or intermediary immunity. The proposal did not seek to impose liability on platforms for user content; instead, it focused on requiring platforms to ensure that product design met their duty of care to users. These discussions with DCMS continued, and progressively intensified.

The White Paper which was ultimately released in April 2019 clearly articulated that “The government will establish a new statutory duty of care to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services,” and outlined what that duty of care would look like and how it would be regulated.  

Changes within the Tory leadership ultimately delayed progress. There were also concerns raised by some of those in the free speech lobby who saw parts of what was being proposed as censorship.  Lorna’s background in freedom of speech helped her respond to those concerns: “I was concerned that freedom of speech was being used as a slogan. When you look at any right and you look at it in isolation, you are then implicitly privileging it. And here, it was important not just to consider the rights of the ‘speaker’ but the rights of all the other users as well, some of whom are extremely vulnerable.” 

These points align with what the UN Special Rapporteur on Freedom of Opinion and Expression explained in her 2023 report on gendered disinformation, which notes, citing Lorna’s submission, that “Systemic regulation, which emphasizes ‘architecture over takedown’, allows for more proportionate responses and is likely to be better aligned with freedom of expression standards.”

Certainly, companies were lobbying in other directions, and the Act reflects some corporate compromises, such as the need for the duty of care to be applied proportionately to account for the different levels of resources of regulated companies. But there were powerful counter-arguments, and the NSPCC and other organisations were effective allies, particularly on the need for clear duties of care in relation to child users. The Daily Telegraph also ran an important campaign on the legislation. The Government at one point sought to restrict the Act to concerns about children, so maintaining a focus on harm to adults also became part of the campaign (unfortunately, only limited protections were retained). There are other parts of the Act which differ from what Lorna and William had proposed, such as dividing up the regulatory framework by reference to certain types of conduct. Inevitably there were compromises.

The Act as adopted envisages that the communications regulator Ofcom will produce guidance and codes explaining what internet platforms must do in order to operate in the United Kingdom. There are ongoing consultations regarding these texts. Once the guidance and codes are in place, companies will have three months to bring their practices into compliance with the requirements. Thereafter, the duties of care will become binding.

Some companies appear to be arguing that a duty of care is too vague a standard; however, this is hard to accept, given that it is a recognised legal standard. The goal for Lorna and others is therefore to ensure that the duty of care standard is made operational in such a way that it provides clear and adequate protections; it should be more than a ‘tick the box’ exercise.

I asked Lorna how this legislation would tackle the activities of companies operating outside of the UK, but with impacts in the UK. She explained that parts of the Act have extraterritorial effect, to the extent that company activities are directed at or have impacts in the UK. Some companies have introduced policies for different geographical regions to address the requirements of national legislation, so this is a possibility for multinational internet platforms accessible to UK users.  

I also discussed with Lorna whether she believed individuals like Molly Russell would be more effectively safeguarded now that the Online Safety Act is in force. She explained that Molly would not be better off today, because the guidance and codes are not yet in place. “Maybe in a year’s time, she would probably be better protected, as a child. I think an 18-year-old Molly would be sadly let down by the regime, which should be more robust.”

Given the clear synergies with her work on the Act, Lorna is also progressing with work on online gender-based violence, as well as on gendered misinformation, incel culture and extremism. Her ongoing work in these critical areas continues to reveal new challenges and fresh avenues for advocacy and change.

New communications offences enacted by the Online Safety Act 2023

By Dr. Alexandros Antoniou, Essex Law School

The Online Safety Act 2023 (OSA) introduced a range of measures intended to improve online safety in the UK, including duties on internet platforms about having systems and processes in place to manage illegal and harmful content on their sites. On 31 January 2024, Part 10 of the Act came into effect, introducing a series of new criminal offences which represent a significant leap forward in tackling complex challenges surrounding online communications safety.

Section 179 of the OSA establishes the criminal offence of sending false communications and seeks to target, among others, internet trolls. An offence is committed if an individual (a) sends a message conveying information they know to be false; (b) intends, at the time of sending, to cause non-trivial psychological or physical harm to a likely audience; and (c) has no reasonable excuse for sending the message. Recognised news publishers and broadcasters are exempt. The offence does not apply to public screenings of cinema films either. It can be committed by individuals outside the UK if they are habitually resident in England, Wales, or Northern Ireland. Penalties include imprisonment for up to six months, a fine, or both. It is hoped the new offence will help clamp down on disinformation and election interference online.

Section 181 establishes the criminal offence of sending threatening communications. This is committed when an individual sends a message containing a threat of death, serious harm (e.g. bodily injury, rape, assault by penetration), or serious financial loss, with the intent to instil fear in the recipient that the threat will be carried out (whether by the sender or someone else). In cases of threats involving financial loss, a defence is available if the threat was used to support a reasonable demand, and the sender reasonably believed it was an appropriate way to reinforce that demand. This offence applies to individuals residing in England, Wales, or Northern Ireland, even if the sender is located outside the UK. Penalties include up to five years of imprisonment, a fine, or both. In March 2024, Essex law enforcement achieved a significant milestone by obtaining one of the earliest convictions under the new OSA, resulting in an eight-month jail sentence for Karn Statham. Statham harassed a woman by sending threatening messages and making repeated visits to her address after being instructed to cease contact.

A new criminal offence under section 183, dubbed “Zach’s law”, aims to protect people from “epilepsy trolling”. The campaign against such conduct began when eight-year-old Zach, who has epilepsy, was raising funds for the Epilepsy Society. Trolls inundated the Society’s profile with images and GIFs meant to induce seizures in people with epilepsy. While Zach was unharmed, others with the condition reported seizures after engaging with the fundraiser online. The Act creates the offence of deliberately sending or showing flashing images to individuals with epilepsy with the intent to cause harm, defined as inducing a seizure, alarm, or distress. Particular conditions (specified in the Act) must be met before a conviction is secured, in respect of both sending and showing flashing images electronically. Recognised news publishers, broadcasters, public screenings of cinema films and healthcare professionals cannot be guilty of this offence (which can similarly be committed by individuals outside the UK if they are habitually resident in England, Wales, or Northern Ireland). Penalties include imprisonment for up to five years, a fine, or both.

Moreover, section 184 outlaws encouraging or assisting serious self-harm. To be guilty of this offence, an individual must perform an act intended to encourage or assist serious self-harm in another person, whether through direct communication, publication or sending (or giving) items with stored electronic data. Serious self-harm encompasses actions leading to grievous bodily harm, including acts of omission such as encouraging someone to neglect their health regimen. The identity of the person harmed need not be known to the offender. The offence can occur regardless of whether self-harm is carried out and it is irrelevant who created the content in question (it is the sending that matters). The offence is punishable by imprisonment for up to five years, a fine, or both, and likewise, it applies to individuals habitually resident in England, Wales, or Northern Ireland, even if they are outside the UK.

Cyber-flashing on dating apps, AirDrop and other platforms will also result in perpetrators facing up to two years in prison. Section 187 of the Act introduces a new offence under the Sexual Offences Act 2003 pertaining to the sending of photographs or films of a person’s genitals to another individual. A person (A) is deemed to commit the offence if they intentionally send or provide a photo or video of another person’s genitals to another individual (B) under the following conditions: either A intends for B to view the genitals and experience alarm, distress, or humiliation; or A sends or provides such material with the aim of obtaining sexual gratification and is reckless as to whether B will experience alarm, distress, or humiliation. “Sending” covers sending through any means, including electronic methods, showing it to another person, or placing it for someone to find. A conviction for this offence could also lead to inclusion on the sex offenders’ register. In February 2024, an Essex Police team secured the UK’s first cyber-flashing conviction, with Nicholas Hawkes pleading guilty to sending explicit images via WhatsApp to cause distress. On 19 March 2024, Hawkes was sentenced to 66 weeks in prison. He was also made subject to a restraining order for 10 years and a Sexual Harm Prevention Order for 15 years.

Finally, the OSA repeals the legislation first introduced to tackle ‘revenge porn’ offences (sections 33-35 of the Criminal Justice and Courts Act 2015) and introduces a set of intimate image sharing offences. Specifically, section 188 of the OSA introduces a new base offence of sharing intimate images without consent, carrying a penalty of imprisonment for up to six months. This applies when an individual intentionally shares an image portraying another person in an intimate context without their consent and without a reasonable belief in consent. Two more serious offences are established on top of that, both reflecting the offender’s higher culpability and carrying greater penalties: namely (a) intentionally causing alarm, distress, or humiliation to the person in the image; and (b) seeking sexual gratification from the act (these are outlined in sections 66B(2) and (3) of the Sexual Offences Act 2003). Threatening to share an intimate image of a person has also been made an offence where the perpetrator either intends to cause fear that the threat will be carried out or acts recklessly in doing so (this is found under section 66B(4) of the aforementioned 2003 Act). The new offences also fall under the sexual offender notification requirements. These new intimate image offences are also designed to tackle “deepfakes” and “down-blousing” (i.e. capturing images, typically of a person’s chest area, from a downward angle, often without their knowledge or consent). They also come with various exemptions (outlined under section 66C of the Sexual Offences Act 2003), e.g. where the photograph or film involves a child and is of a kind normally shared among family and friends.

While there is some overlap with existing offences, the new offences consolidate previous ones or address gaps. For example, the intimate image sharing offence widens the scope of the covered photographs or films from “private sexual” to “intimate” and makes it easier to prosecute those caught sharing such content online without the other person’s consent, as it removes the requirement that harm to the subject of the photograph or film be intended. The updated guidance of the Crown Prosecution Service aims to delineate the appropriate charge for each circumstance. The introduction of the new offences is anticipated to fortify protections against online misconduct.


This article was first published on the IRIS Merlin database and is reproduced here with permission and thanks.

Alcohol labelling and warnings: how progress at the Codex Alimentarius Commission can help States overcome challenges at the World Trade Organization

By Nikhil Gokani, Lecturer in Law, Essex Law School, University of Essex

In this post, Nikhil Gokani writes about the work he is involved in on developing international standards, which can help countries navigate challenges under the rules of the World Trade Organization. Nikhil works on food and alcohol labelling regulation in the UK, EU and globally. He is chair of the Alcohol Labelling and Health Warning International Expert Group at the European Alcohol Policy Alliance (Eurocare). He is also a member of the Technical Advisory Group on Alcohol Labelling at WHO.

Alcohol-related harm and consumer protection

Consuming alcohol is a causal factor in more than 200 diseases, injuries and other health conditions. Alcohol consumption affects other people, such as family, friends, colleagues and strangers. Globally, about 3 million deaths each year result from the use of alcohol. Beyond health, there are significant social and economic burdens.

Consumers do not have sufficient knowledge about the content and effects of alcoholic beverages. Most consumers are unaware of the energy and nutrition values (such as amount of carbohydrates) and ingredients. Few consumers are aware of the health risks, such as alcohol causing at least seven cancers.

Alcohol labelling and global progress

Alcohol labelling is an important source of information for consumers. Labelling is unique in providing information at both the point of purchase and consumption. Labelling improves knowledge. It is an effective measure to help ensure consumers are well-informed and not misled. Increasing evidence also shows that health information can empower consumers to make healthier consumption decisions, including drinking less.

Unfortunately, few countries in the world require that consumers are given essential facts on labelling, such as ingredients lists and nutrition declarations. Even fewer countries require beverages to be labelled with information warning consumers about the hazards of drinking alcohol.

The most recent success was in Ireland where new rules will require alcohol packaging to display warnings that “Drinking alcohol causes liver disease”, “There is a direct link between alcohol and fatal cancers” and a pictogram showing that alcohol can harm the unborn child if drunk during pregnancy. Countries like Ireland, unfortunately, face international legal challenges, particularly under international trade law.

International trade law and international standards

International trade law can constrain the regulatory autonomy of States. Significant to alcohol labelling is the World Trade Organization (WTO) Agreement on Technical Barriers to Trade (TBT Agreement). Most significantly, Article 2.2 of the TBT Agreement states that technical regulations, including rules on alcohol labelling, shall not create “unnecessary obstacles to international trade”. Technical regulations shall not be “more trade-restrictive than necessary to fulfil a legitimate objective”. Preventing alcohol-related harm is indeed a legitimate objective. However, many States trying to introduce better alcohol labelling rules have been challenged by other States arguing that the labelling rules are more trade-restrictive than “necessary”.

When a WTO Member’s rule on alcohol labelling is challenged, international standards can either help or hinder that State.

On the one hand, Article 2.4 of the TBT Agreement states that where “relevant international standards exist” States “shall use them…as a basis for their technical regulations” except when this would be ineffective or inappropriate. Therefore, where international standards are not aligned with public health interests, they can make it harder for States to introduce effective national rules.

On the other hand, Article 2.5 of the TBT Agreement provides a powerful defence mechanism. It states that, when a technical regulation is “in accordance with relevant international standards”, there is a rebuttable presumption that the national rule does not create an unnecessary obstacle to international trade. Simply stated, where a State complies with a relevant international standard, it has a potentially strong defence for its labelling rules. Therefore, good international standards can be very powerful in helping countries defend their national labelling policies.

Codex Alimentarius

An international standard is one made by a recognised body and with which compliance is voluntary. For alcohol labelling, there is indeed an international standard: the Codex Alimentarius, a collection of standards, guidelines and codes adopted by the Codex Alimentarius Commission.

Where alcohol labelling is in compliance with relevant Codex standards, States could use this as a defence under WTO rules. This underlines the importance of having good Codex standards that support effective national rules on alcohol labelling.

Significant progress has been made at the Codex Alimentarius Commission. Alcohol labelling has been discussed at four Sessions of the Codex Committee on Food Labelling (CCFL). The Report of the 46th Session of CCFL noted “there was common ground on which to proceed with the work”, but little further progress has been made in recent years. At that Session, the Committee agreed that Russia, the European Union and India, with assistance from WHO and Eurocare, would prepare a discussion paper for consideration at the next meeting. In fact, this was the first time this Committee had included an NGO in the preparation of a discussion paper, a testament to Eurocare’s global leadership in this field. Unfortunately, however, no discussion paper was submitted by Russia. WHO and Eurocare therefore each submitted their own discussion paper to keep the matter moving forward. The WHO representative spoke objectively and convincingly at the 47th meeting of CCFL. These efforts led to alcohol labelling remaining on the Codex agenda – something which several States, no doubt under the influence of the powerful alcohol industry, had resisted.

The Codex Alimentarius Commission has now started a new consultation process. It issued a Circular Letter which asks State members and Observers to comment on how work on developing alcohol standards should proceed.

For this consultation process to work best for public health and consumer protection, we need everyone to contact their governments (emails here) to demand effective progress at Codex. Please join us in these efforts!

Navigating freezones in the influencerdom: a shadowlands guide

By Dr. Alexandros Antoniou, Essex Law School

Influencer marketing has emerged as a formidable force in the realm of advertising, wielding substantial power to sway consumer behaviour and shape brand perceptions. Leveraging the credibility and reach of social media personalities, brands can effectively tap into niche audiences and foster authentic connections.

Despite its undeniable impact, there remains a notable lack of comprehensive research and regulatory oversight surrounding influencer marketing practices. As the landscape continues to evolve rapidly, it becomes increasingly imperative for regulators to delve deeper into this field in order to safeguard followers’ interests and maintain the integrity of digital advertising ecosystems in which influencers operate.

My new research looks at the rapidly evolving landscape of influencer marketing and its profound effects on the dynamics between social media users, advertisers, and brands. In the article, I demonstrate that influencers have transcended the dichotomy of self-publishers vs traditional advertisers, shaping distinct career trajectories.

With the burgeoning influencer industry in mind, I critically examine the regulatory landscape, particularly the responsiveness of the Advertising Standards Authority (ASA) and the Competition and Markets Authority (CMA) to influencers’ professionalisation.

Despite the industry’s growth, regulatory gaps persist, leaving consumers vulnerable to lightly-overseen influencers. I caution that regulators rely on antiquated tools, allowing newcomers in the industry to fly beneath their radar.

For instance, the established advertising rule that ads must be clearly identifiable as ads predominantly applies to those influencers who have forged brand partnerships. However, I argue that early-career influencers, who may not monetise their content, still wield significant influence. They have a remarkable knack for cultivating genuine connections that bestow hidden promotional content with an unmatched aura of trustworthiness.

I conclude that, from a regulatory standpoint, influencers’ increasing professionalisation is going unrecognised. I advocate for a transformative shift in regulatory perspective to encompass influencers throughout their career journey, challenging the prevailing notion that only high-reach influencers warrant scrutiny.

Therefore, I emphasise the need for a recalibrated regulatory threshold that accounts for emerging influencers, endorsing a more comprehensive definition and a holistic approach that recognises the multifaceted nature of influencer marketing practices.

My article, published in the Journal of Computer, Media and Telecommunications Law (Vol. 29, Issue 1, pp. 8-21) urges regulators to adapt to the nuanced and evolving nature of influencer marketing to ensure a more robust oversight and integrity in this emerging profession.

Essex Law School Expert Praised in House of Lords for Work on Online Safety Legislation

Essex legal expert Lorna Woods has earned special recognition in the House of Lords thanks to her research and work supporting the landmark Online Safety Bill. The Bill successfully passed through Parliament and is now enshrined in law, having received Royal Assent on Wednesday 26 October 2023. The Act requires social media companies to keep the internet safe for children and to give adults more choice over what they see online.

Professor Woods helped shape the Bill after famously writing some of its founding principles on the back of a sandwich packet with the help of William Perrin, of the charity Carnegie UK, several years ago.

Professor Woods has continued to work with Carnegie throughout the last few years and provided expert advice to backbenchers and members of the House of Lords.

Following the final debate in the Lords, she was personally thanked by Lord Stevenson for her work on the Bill.

Lord Clement-Jones added: “I pay my own tribute to Carnegie UK, especially Will Perrin, Maeve Walsh and Professor Lorna Woods, for having the vision five years ago as to what was possible around the construction of a duty of care and for being by our side throughout the creation of this bill.”

Professor Woods became a high-profile commentator on the Bill throughout its passage through Parliament, and recently recounted the “surreal moment” it was approved by the Lords in an interview with BBC Online.

In a separate interview with Wired, Professor Woods responded to criticisms of the Bill by insisting it would help protect the human rights of children being exploited and abused online.

She was also quoted in the New York Times’ coverage of the Bill and has appeared on BBC Radio Five Live.

Professor Woods said: “The Bill is significant as it marks a move from self-regulation – where service providers decide what is safe design and whether to enforce their community standards – to regulation under which services are accountable for those choices.”


This story was first published on the University of Essex’s news webpages and is reproduced on the ELR Blog with permission and thanks. The story was edited to reflect the fact that the Bill received Royal Assent.

Front-of-Pack Nutrition Labelling: Time for the EU to Adopt a Harmonized Scheme 

By Dr Nikhil Gokani (Essex Law School) and Prof Amandine Garde (School of Law and Social Justice, University of Liverpool)

Nutri-Score label as published by Santé Publique France

In its Farm to Fork Strategy, published nearly 3 years ago in May 2020, the European Commission committed to ‘propose harmonised mandatory front-of-pack nutrition labelling’ (‘FoPNL’) to ‘empower consumers to make informed, healthy and sustainable food choices’ by the fourth quarter of 2022. This commitment was repeated in Europe’s Beating Cancer Plan in February 2021. The deadline has now passed and the promised proposals do not seem forthcoming. This is all the more disappointing considering there is strong support for the implementation of an EU-wide harmonized FoPNL scheme, as demonstrated by the results of the EU consultation on ‘Food labelling—revision of rules on information provided to consumers’ published in December 2021.

Such support is not surprising considering the significant advantages that the adoption of a harmonised FoPNL scheme has for consumers, traders, Member States and the EU alike.

  • From the perspective of consumers, an effectively designed FoPNL scheme helps inform them of the nutritional composition of food. Informing consumers lies at the heart of the EU’s consumer protection strategies and reflects its long-held view that regulating food labelling empowers consumers to make healthier choices whilst promoting the objectives of market integration. At present, the EU only mandates a small table of nutrition information on the back of food packaging. This is often hard to see and difficult to understand, whereas effectively designed FoPNL can provide easy-to-see and easy-to-understand information on the front of food packaging thus supporting healthier food choices.
  • From the perspective of traders, harmonized FoPNL will create a level playing field by reducing regulatory fragmentation, which will also increase legal certainty and lower labelling costs. There are currently 7 national FoPNL schemes recommended across 15 EU Member States. Further industry-led schemes are used, although they have not been officially endorsed by any Member State. While some manufacturers have adopted FoPNL, many have not, and others are using multiple different schemes.
  • From the perspective of Member States, a mandatory, EU-wide FoPNL scheme will contribute to improving diets and health outcomes. Current EU rules prohibit the adoption of FoPNL schemes which are interpretive, and do not facilitate the adoption of FoPNL schemes which are easy to use. They also prevent Member States from making FoPNL mandatory.
  • From the perspective of the EU itself, a harmonized FoPNL scheme will promote the proper functioning of the internal market in line with the EU’s mandate to ensure a high level of health and consumer protection in all its policies. Moreover, it will facilitate the compliance of all its Member States with the commitments that they have made at international level to promote healthier food environments.

The choice of any single scheme must be guided by evidence. The Commission’s Joint Research Centre reviews, published in 2020 and 2022, identify what makes FoPNL effective:

  • colour-coded labels draw consumer attention through increased salience, are preferred by consumers, are associated with increased understanding and encourage healthier food purchases;
  • simple labels require less attention to process and are preferred and more easily understood by consumers; and
  • consumers prefer and better understand consistent and simple reference quantities.

In its Inception Impact Assessment of December 2020, the Commission put forward four types of labels as contenders for a harmonized, mandatory EU-wide scheme: graded indicators (e.g. Nutri-Score); endorsement logos (e.g. Keyhole); colour-coded (e.g. Multiple Traffic Lights); and numerical (e.g. NutrInform). Of the four schemes considered in the Inception Impact Assessment, Nutri-Score is clearly the only one meeting the criteria above, and its effectiveness is strongly established. Not only does it attract consumers’ attention, but it is also favourably perceived and well understood. It also has a positive impact on the nutritional quality of purchases. Additionally, the nutrient profiling model underpinning Nutri-Score has been extensively validated and shown to be associated with improved health outcomes. Even if no scheme will ever be described as ‘perfect’ by all stakeholders, its developed evidence base and its adoption by a growing number of Member States make Nutri-Score the only viable option for the timely implementation of a mandatory, harmonised FoPNL scheme in the EU.

Growing rates of obesity and other diet-related diseases increase the urgency for the EU to act. We, therefore, call on the Commission to propose legislation requiring food to be labelled with Nutri-Score on a mandatory basis across the EU, as it has committed to do.


This post was originally published as an invited editorial in the European Journal of Public Health in June 2023. It is available here.

Nikhil Gokani is an expert in the regulation of front-of-pack nutrition labelling in the EU and globally. Please click here for his profile and contact details.

The Online Safety Bill: Where Are We Now and Will It Succeed?

The House of Lords is currently debating at Committee Stage the Online Safety Bill, a landmark piece of legislation which introduces a new set of internet laws to protect children and adults from online harms.

The Bill will establish a regulatory framework for certain online services. These include user-to-user services, such as Instagram, Twitter and Facebook, and search services, such as Google.

The UK government’s stated aim in introducing the Bill is “to make Britain the best place in the world to set up and run a digital business, while simultaneously ensuring that Britain is the safest place in the world to be online”.

The Bill will place duties of care on both regulated user-to-user service providers and regulated search service providers. The regulated service providers would have duties relating to, among other things: (a) illegal content; (b) protecting children; (c) user empowerment; (d) content of democratic importance, news publisher content and journalistic content; (e) freedom of expression and privacy; and (f) fraudulent advertising.

The Bill also does two other distinct but interconnected things. It introduces age-verification requirements in relation to pornography providers (which are not user-to-user), as well as new criminal offences, e.g., encouraging self-harm and epilepsy trolling.

This makes it a long, wide-ranging and complex Bill.

Moreover, the Bill will place more responsibility on technology giants to keep their users safe. It will give Ofcom, the UK’s communications regulator, the power to levy fines against non-compliant providers, and would make senior managers liable to imprisonment for not complying with a direction to provide Ofcom with information.

But what impact is the Bill expected to have? What concerns are there about the implementation of this new regime?

Prof. Lorna Woods (Professor of Internet Law, University of Essex), who devised the systems-based approach to online regulation that has been adopted by the Government and whose work is widely regarded as laying the groundwork for the UK’s Online Safety Bill, was recently interviewed on this new regulatory approach.

On 11 May 2023, Prof. Woods stepped inside BBC Radio 4’s Briefing Room to be interviewed by David Aaronovitch. She talked about what is actually in the Bill, how the new internet laws are intended to work and what potential weaknesses still remain. The programme can be accessed here.

Prof. Woods also joined Conan D’Arcy of the Global Counsel tech policy team to talk about UK tech regulation and recent criticisms of the Online Safety Bill, as well as the regulation of generative AI tools like ChatGPT. You can listen to the podcast here (published on 17 May 2023).

New Standards Code launched by press regulator IMPRESS

By Alexandros Antoniou, Essex Law School

On 16 February 2023, the press regulator IMPRESS launched its new Standards Code, with key changes including guidance on AI and emerging technologies, stricter measures on tackling misinformation, stronger safeguarding guidelines, and a lower discrimination threshold.

Background

IMPRESS is the only British press regulator to have sought formal approval from the Press Recognition Panel (PRP). The Panel was established in the aftermath of the phone-hacking scandal to ensure that any future press regulator meets certain standards in compliance with the Leveson report recommendations. IMPRESS is distinct from the Independent Press Standards Organisation (IPSO), Britain’s other press regulator which enforces the Editors’ Code of Practice but does not comply with the majority of the Leveson report’s independence requirements. IPSO regulates some of the more established UK press (e.g., the Mail newspapers, the News UK titles and their respective websites), whereas publishers regulated by IMPRESS tend to be newer and more digitally focused (e.g., Bellingcat, Gal-dem and The Canary). IMPRESS is viewed by some media campaigners (e.g., Hacked Off) as “the most popular” complaints-handling body in the country. Its membership has risen from just 26 publishers in 2017 to 113 today.

The IMPRESS Code was first published in 2017 with the aim of guiding media professionals and protecting the public from unethical news-gathering activity. It applies to all forms of news delivery, including print publications, news websites and social media, and to any individual or organisation gathering information and publishing news-related content. As the media landscape has rapidly evolved in the last few years, changes were introduced in February 2023 to help build trust and improve accountability in the industry, while covering a more diverse range of digital news creators (including publishers, editors, journalists, citizen journalists, reporters, bloggers, photojournalists, freelancers, and content creators) and their practices.

Some key changes

A major change concerned the issue of inaccurate content and was propelled by the challenges faced in distinguishing true information from misinformation and disinformation, including that generated by AI. To help journalists and publishers ensure that their material is supported by verifiable and legitimate sources, the Code and its associated Guidance on Clause 1 (Accuracy) and Clause 10 (Transparency) provide advice on fact checking and source verification, particularly within an online context. Specifically, the Code now requires publishers to exercise human editorial oversight to ensure the accuracy of any AI-generated content, clearly label such content, and take reasonable steps to limit the potential spread of false information (whether deliberate or accidental) by verifying the story with other sources and checking the information against other reliable sources.

Changes were also introduced in relation to the coverage of news stories involving children. These changes acknowledge children’s media literacy and autonomy, and the protections necessary for their development as people. The revised Code defines a child as anyone under the age of 18 and places an obligation on publishers to “reasonably consider” requests from children to remain anonymous during news-gathering and publication (Clause 3.3), as well as requests from those who were under 18 when the article was published to anonymise that news content in the present day (Clause 3.4). This is a welcome recognition of the proposition that individuals should not be adversely affected later in life because stories that concern them as children remain widely available online. Importantly, under the new Code, an appropriate adult cannot veto a child’s refusal or revocation of consent (paragraph 3.1.2 of the Guidance to the Code).

Because of the internet and social media, publishers must also take extra care not to identify children indirectly through “jig-saw identification”, i.e., the ability to work out someone’s identity by piecing together different bits of information supplied by several features of the story or across articles or news outlets (the same can apply to adults, e.g., in cases where victims of sexual offences enjoy anonymity by law). The Code (Clause 3.2) requires publishers to consider using techniques or practices that remove identifying data (e.g., the area of a city where they live, their parents’ occupations or other unusual details that could lead to a child’s identification). This practice also helps publishers comply with data minimisation requirements under data protection law.

Another significant change concerns the provisions on discrimination under Clause 4. The previous version of the Code stated that publishers would be found in breach if they incited hatred “against any group … [on any] characteristic that makes that group vulnerable to discrimination”. This reflected the legal standard under UK law, but it was not adequately enforced, particularly online. The revised Code holds publishers to stricter standards. Clause 4.3 reads: “Publishers must not encourage hatred or abuse against any group” based on those characteristics (emphasis added). The new wording lowers the threshold for what IMPRESS regards as discriminatory coverage and takes into account its potential effect not just on the communities concerned, but on society as a whole. This change, according to IMPRESS’ Deputy Chief Executive Lexie Kirkconnell-Kawana: “accounts for prejudice that could be more insidious and be more cumulative or more thematic, and not a direct call to action or violence against a group of people – because that’s an incredibly high threshold, and it’s not often how news is carried. You don’t see headlines saying […] ‘Take up arms against x group’.”

Clause 7 on privacy highlights that, when determining the privacy status of the information, publishers must give “due consideration to online privacy settings” (Clause 7.2(b)). Public interest justifications may, however, apply. The provision challenges the widely held misconception that information found or posted online is automatically made public or free to use. The Guidance to the Code acknowledges that an individual’s expectation of privacy may be weaker where no privacy settings are in place but clarifies that the absence of privacy settings will not necessarily prevent a breach of this Clause. It does not automatically mean that an individual consents to publishers or journalists publishing their content, which may reach an entirely different – or even wider – audience than the audience usually viewing the content on that individual’s account (paragraphs 7.1.4 and 7.2.6 of the Guidance to the Code).

Editorial responsibility and accountability with an outlook to the future

The new Code is the outcome of an intensive two-year review process, which involved consultation with academics, journalists, members of the public and industry stakeholders. Richard Ayre, Chair of IMPRESS, stated: “With more news, more sources, more publishers, more opinions than ever before, the opportunities for journalism are limitless. But nothing’s easier for a journalist to lose than public trust. This new Code sets the highest ethical standards for IMPRESS publishers, large and small, and whatever their point of view, so the public can confidently engage with the news of today, and tomorrow.”


This article was first published on the IRIS Merlin legal database. The original piece can be viewed here.

OFCOM Reports on Its First Year of Video-Sharing Platform Regulation

By Dr. Alexandros Antoniou, Lecturer in Media Law, University of Essex

Ofcom, the UK’s communications regulator, has published its first report on video-sharing platforms (VSPs) since becoming the statutory regulator for such platforms established in the UK. The report is the first of its kind under the VSP regime and reveals information previously unpublished by in-scope regulated companies.

Platforms’ compliance with the new VSP regime

Ofcom’s report outlines the regulator’s key outcomes from the first year of regulation (October 2021 to October 2022). Its findings stem from the use of the regulator’s statutory powers under section 368Z10(3) of the Communications Act 2003 to issue enforceable information requests to all notified VSPs.

Specifically, some platforms made positive changes to their systems and processes in light of the new VSP requirements, e.g., TikTok’s dedicated online safety committee providing oversight of content and safety compliance, Snapchat’s parental control feature, and OnlyFans’ age assurance tools for all new UK subscribers. However, Ofcom found that platforms provided limited evidence of how well their user safety measures operate, obscuring their effectiveness and consistency. It also emerged that some platforms are not adequately resourced, equipped and prepared for regulation, and there is a clear need for some of them to improve the quality of their responses to the regulator’s information requests. Moreover, Ofcom found that platforms did not prioritise risk assessment processes, despite their importance in proactively identifying and mitigating safety risks. Risk assessments, however, will be a requirement on all regulated services under future online safety laws that will eventually supersede the VSP regime. Finally, some adult VSPs’ access control measures were not found to be sufficiently robust in preventing children from accessing pornographic content.

Moving towards the second year of the implementation of the regime, Ofcom will dedicate most of its attention to the comprehensiveness of user policies (also known as Community Guidelines), including their application and enforcement; the availability of appropriate tools empowering users to tailor their online experience; and the implementation of suitable age verification (AV) mechanisms to protect children from harmful online content, including pornography.

To increase the transparency of platform processes and raise awareness of how VSPs protect against harmful content, Ofcom’s report also sets out the measures adopted by some platforms to protect their users. The following platforms were reviewed in particular: TikTok, Snapchat, Twitch, Vimeo, BitChute, and some smaller VSPs including Fruitlab, ReCast Sport and Thomas Cook, as well as smaller adult VSPs like AdmireMe, FanzWorld and Xpanded. The report explains the governance processes within each regulated service (giving detail on their systems for online safety risk management) and the journey followed by users/subscribers on each of these platforms.

Additional sets of research

Ofcom also made available a report on the VSP Landscape in the UK, describing the context in which providers apply protection measures. The report offers insights into: (a) who the notified VSP providers are; (b) how many users of VSPs there are in the UK and their demographics; (c) what the main business models used by VSP providers are; and (d) what information VSP providers make publicly available in their transparency reports.

With the aim of building its evidence base around the appropriateness of certain protection measures, Ofcom commissioned further sets of research to understand people’s experiences of using (and attitudes towards) safety measures on VSPs. The research explored a range of users’ perspectives, from parents (or carers) of children aged 6-17 to users of porn platforms.

More specifically, the VSP Parental Guidance Research looked at parents’ attitudes towards children’s online behaviours. In summary, it found that parents tended to perceive VSPs generally as having a constant and unregulated stream of content. Based on their current understanding and the information available to them, six in ten parents said they did not use parental controls on the VSPs that their child uses, because their child “did not need them”. Just over half of parents remembered seeing or receiving guidance on how to keep their child safe online from multiple sources (government websites being the most trusted). However, the study revealed that the process of finding information on online safety was described by many parents as overwhelming and often only prompted by a specific incident (e.g., school guidance, discovering their child was looking at inappropriate content). Parents were also appreciative of safety guidance from VSPs that was clear, digestible, accessible, and easy to understand.

An additional set of research, Adult Users’ Attitudes to Age Verification (AV) on Adult Sites, found that, although there was broad support among adult participants for age assurance measures to prevent under-18s from accessing online pornography, UK adult sites were not doing enough to protect children. The biggest adult video-sharing site, OnlyFans, introduced new age verification measures in response to regulation (using third-party tools), but smaller UK-based sites did not have sufficiently robust access controls. A review of subscriber sign-up processes showed that smaller UK-established adult VSPs have AV measures in place when users sign up to post content, but users can generally access adult content simply by self-declaring that they are over 18. Ofcom’s research showed that 81% of participants accepted AV measures in contexts where they would generally expect them (e.g., when purchasing alcohol online or participating in online gambling). A similar proportion (80%) felt that internet users should be required to verify their age when accessing pornography online, especially on dedicated adult sites. The use of a credit card was the preferred means of AV for paid access to pornography. Participants expressed serious concerns about how user data might be processed and stored during AV processes to access pornography, reflecting a very low level of trust in the data privacy practices of adult sites.

These findings will inform Ofcom’s regulation of VSPs, including the rules on the protection of children, and its engagement with notified providers.


This article was first published on the IRIS Merlin legal database. The original pieces can be viewed here.

Ofcom clears ITV for Piers Morgan’s controversial comments about Meghan Markle

Prince Harry and Meghan Markle going to church at Sandringham on Christmas Day 2017 | Source: Wikimedia Commons

Dr. Alexandros Antoniou, School of Law, University of Essex

On 1 September 2021, Ofcom, the UK’s communications regulator, rejected a record number of complaints about Piers Morgan’s comments on Good Morning Britain in the wake of the Duke and Duchess of Sussex’s interview with Oprah Winfrey.

Good Morning Britain (GMB) is a weekday morning news and discussion programme broadcast on ITV. On 8 March 2021, GMB was dominated by the interview between Oprah Winfrey and the Duke and Duchess of Sussex which had been broadcast overnight in the USA. Excerpts from the interview had been made publicly available ahead of its full broadcast in the UK that evening. The programme included a report on how the US was reacting to the interview and focused on two parts which revealed that the Duchess had contemplated suicide and that an unnamed member of the Royal Family had raised concerns about “how dark” her son’s skin colour might be.

The following day, the lead presenter Piers Morgan made it very clear during the show that he did not believe a word of what Meghan Markle had said, adding that if she read him a weather report, he would not believe it. Mr. Morgan stormed off the GMB set after clashing with weather presenter Alex Beresford over his controversial remarks. By the end of the day, the mental health charity Mind had released a statement expressing deep concern over the statements aired in the show. This was rather awkward for ITV because of its 2021 Britain Get Talking mental wellness campaign, in which Mind is a partner. A strong public reaction ensued: Ofcom received more than 57,000 complaints about Mr. Morgan’s comments on GMB, making it the most complained-about TV show in Ofcom’s history. The same evening, ITV announced that the GMB host had resigned from his role on the show after six (often confrontational) years.

The complaints received by the regulator can be grouped under two main categories. The first related to Morgan’s statements on the Duchess of Sussex’s revelations about her mental health and suicidal feelings. The second related to the presenter’s questioning of the Duchess’ personal account of her experiences of racism within the Royal Family during her time as a senior royal. The programme raised issues under Section Two of the regulator’s Broadcasting Code, which outlines standards for broadcast content in respect of harm and offence.

In particular, the rules engaged were Rule 2.1 which provides that “generally accepted standards must be applied to the content of television and radio services […] so as to provide adequate protection for members of the public from the inclusion in such services of harmful and/or offensive material” and Rule 2.3 which requires that broadcasters must ensure that potentially offensive material is justified by the context. Under the latter, racist terms and material should be avoided unless their inclusion can be justified by the editorial content of the programme.

As far as the discussion of mental health and suicide in the programme is concerned, Ofcom held in a 97-page ruling that Piers Morgan was entitled to hold and express strong views that scrutinised the veracity, timing and possible motivations behind the allegations made by the Duke and Duchess of Sussex. Their interview was a major international news story and a legitimate subject for debate in the public interest. Restricting such views would be “an unwarranted and chilling restriction” on the broadcasters’ right to freedom of expression and on the audience’s right to receive information and ideas without undue interference (Article 10 of the ECHR). However, while the Broadcasting Code does not seek to curb broadcasters’ right to include contentious viewpoints, compliance with the Code’s rules must still be ensured.

The regulator expressly acknowledged that Piers Morgan’s statements of disbelief of Meghan Markle’s suicidal thoughts had the potential to cause harm and offence to viewers. Without adequate protection by broadcasters, audience members (some of whom were likely to place weight on the presenter’s opinions) may have been discouraged from seeking mental health support for fear of facing a similar reaction. As the Chief Executive of Mind explained in the charity’s statement: “[…] when celebrities and high-profile individuals speak publicly about their own mental health problems, it can help inspire others to do the same. Sharing personal experiences of poor mental health can be overwhelming, so it’s important that when people do open up about their mental health they are met with understanding and support.”

Ofcom underlined its concerns about Mr. Morgan’s apparent disregard for the seriousness of anyone expressing suicidal thoughts, but nevertheless took the view that the robust and direct challenge to his comments from other programme contributors provided important context for viewers throughout the programme. “Overall, adequate protection for viewers was provided and the potentially harmful and highly offensive material was sufficiently contextualised,” Ofcom concluded. Thus, on balance, the programme was not found to be in breach of Rules 2.1 and 2.3 in respect of the discussion on mental health and suicide. Although the regulator ruled in Mr. Morgan’s favour, it reminded ITV to take greater care when discussing sensitive issues around mental health, e.g., through the use of timely warnings or the signposting of support services.

A similar reasoning was followed in relation to the second category of complaints, concerning race. Ofcom considered that the conversations in the programme provided an open and frank debate on the nature and impact of racism, a subject of high public interest value. Given the seriousness of the allegations made in the interview with Oprah Winfrey, it was legitimate to discuss and scrutinise these claims. Moreover, the programme included several contributors who could speak “decisively and with authority” on racial issues, meaning that a range of views was represented and Mr. Morgan’s comments were directly challenged on several occasions. Despite the strong opinions expressed in the programme, which could be highly offensive to some viewers, any potential offence was, in the regulator’s view, justified by the broader context; hence, the comments were not found to be in breach of Rule 2.3 of the Code.

Speaking at a Royal Television Society conference in September 2021, the Chief Executive of Ofcom, Dame Melanie Dawes, defended the regulator’s ruling as “quite a finely balanced decision” that was nonetheless “pretty critical” of Piers Morgan. However, BBC presenter Clive Myrie, who interviewed Dame Melanie at the event, told her: “The media forums that I’m on, which include a lot of black broadcasters and producers and people in the industry, were very upset at the Ofcom ruling concerning Piers Morgan, which was about his comments and views on mental health issues, but that race element is there. And their sense is that it [Ofcom] is too white an organisation and would never understand why that ruling was so upsetting to so many people.”

Piers Morgan was recently nominated for Best TV Presenter at the 2021 National Television Awards. On 15 September 2021, it was reported that he would be joining a Rupert Murdoch-owned network as the host of a new show planned to air in the US, UK and Australia.


This piece was first published on the IRIS Merlin legal database and is reproduced on our blog with permission and thanks. The original article can be accessed here.