Navigating Trade Mark Protection in the Digital World

Photo by Kristian Egelund on Unsplash

By Laia Montserrat Chávez González and Renata Alejandra Medina Sánchez

The digital age has given businesses the opportunity to expand and establish their trade marks worldwide. As a result, they have been able to make themselves known to more consumers and increase their profits. A well-known trade mark has the power to create great economic value for businesses and strengthen a company’s branding in the marketplace.

With WhatsApp, Facebook, and Instagram dominating the social media landscape in the United Kingdom, brands are eager to bolster their presence on these platforms to effectively promote their products or services. However, the advent of e-commerce and social media platforms and the emergence of new technologies have created novel challenges for trade mark protection.

One of these challenges is trade mark infringement, which takes place when someone violates the exclusive rights attached to a registered trade mark without the owner’s authorisation or licence. In this evolving digital landscape, businesses must swiftly adapt and deploy suitable protection strategies to safeguard their trade mark rights online.

This blog post aims to elucidate trade mark infringement on social media for entrepreneurs, while outlining available protective measures within these platforms.

The importance of trade marks in the online business landscape

Social media platforms such as Facebook offer a highly promising opportunity for businesses to communicate with their audience in a fast and direct manner. Several studies have highlighted that these platforms are efficient channels to manage communication with customers and reach a broader audience, not only as a publicity channel but also as a way to attract new clients and recruit employees. In this sense, social media is a valuable tool for preserving and boosting brand reputation at minimal cost.

Brands build relationships with their customer base through offline methods such as offering personalised services, hosting events, and running loyalty programmes, as well as online methods including engaging on social media, utilising email marketing, fostering online communities, collecting and acting on feedback, and providing responsive customer support. The real-time and multi-directional nature of social media facilitates communication and content usage, challenging offline communication models such as radio and television advertisements. Social media now allows consumers to actively participate by sharing opinions and information, influencing brand perception.

To distinguish themselves and ensure their unique identity is recognised and protected across all platforms, brands rely on trade marks. Trade marks have the same role in the online business realm as they do in traditional markets: to differentiate goods and services. Therefore, their legal protection is crucial, not just to prevent consumer confusion but also to safeguard a business’ reputation.

In 2009, L’Oréal SA, a multinational cosmetics and beauty company, and other luxury brands filed a lawsuit against eBay for allowing the sale of counterfeit products bearing their trade marks on its platform. This example highlights the challenges that e-commerce platforms face in monitoring and preventing trade mark infringement by third-party sellers and emphasises the need for robust monitoring systems and cooperation between online platforms and brand owners to maintain consumer trust.

Understanding trade marks: some fundamental concepts

Entrepreneurs must grasp fundamental concepts associated with trade marks and their use in order to protect their brands from infringement on social media and maintain brand integrity in a competitive online marketplace. This section briefly outlines the importance and relevance of these concepts for trade marks.

Trade mark: A trade mark is any sign capable of being represented in a manner which enables the registrar and other competent authorities and the public to determine the clear and precise subject matter of the protection afforded to the proprietor, and of distinguishing goods or services of one undertaking from those of other undertakings. It may consist of words (including personal names), designs, letters, numerals or the shape of goods or their packaging.


Domain Names: Trade marks and domain names intersect in protecting brand identity online; while a trade mark provides rights to a brand name, securing a corresponding domain name, namely the unique address that people use to find the website on the internet, helps ensure exclusive use and prevents ‘cybersquatting’, i.e., the unauthorised registration of domain names resembling trade marked names.


Usernames: Trade marks extend to social media usernames, as usernames often serve as digital representations of trade marks. A username on social media can enhance brand recognition but might not guarantee trade mark protection.


Terms and conditions: They are legal agreements between a service provider and its client, which set the obligations and responsibilities of the parties. Platforms’ terms and conditions may restrict or regulate trade mark use to prevent confusion or misuse, ensuring brand integrity and user trust.


Branding: Branding refers to the process of developing a positive image of a company or its products in customers’ minds by integrating elements such as logo, design, and mission. It relies on a consistent theme across all marketing channels, including social media, consistent trade mark use, and adherence to trade mark laws.

Trade mark infringement

Infringement occurs when someone uses, without authorisation, a sign that is identical or similar to a registered trade mark owned by another person, for goods or services that are identical or similar to those covered by the registration, in a way that is likely to confuse consumers. The following are some common ways in which infringement can occur on social media:

Jacked usernames:

This refers to social media or online account usernames that are ‘hijacked’ by someone other than the trade mark owner, often to exploit a well-known trade mark’s reputation or value. This unauthorised use can mislead consumers and harm a brand. For instance, this could happen if someone other than Nike Inc. registers the username ‘nikeofficial’ on a social media platform. This unauthorised use could confuse consumers into thinking that the account is the official representation of the Nike brand, potentially infringing on Nike’s trade mark rights.
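Brand owners typically automate the detection of such lookalike handles. As a minimal, purely illustrative sketch (the watch-list, candidate handles and similarity threshold below are hypothetical, not any platform’s actual tooling), a first-pass check might flag usernames that embed or closely resemble a protected mark:

```python
from difflib import SequenceMatcher

PROTECTED_MARKS = ["nike"]   # hypothetical watch-list of protected marks
SIMILARITY_THRESHOLD = 0.8   # illustrative cut-off for "confusingly similar"

def is_lookalike(username: str, mark: str) -> bool:
    """Flag a handle that embeds the mark or closely resembles it overall."""
    handle = username.lower()
    if mark in handle:                       # catches e.g. 'nikeofficial'
        return True
    similarity = SequenceMatcher(None, handle, mark).ratio()
    return similarity >= SIMILARITY_THRESHOLD   # catches e.g. 'nikke'

for name in ["nikeofficial", "nikke", "mike_smith"]:
    for mark in PROTECTED_MARKS:
        if is_lookalike(name, mark):
            print(f"Review {name!r}: possible unauthorised use of {mark!r}")
```

Any handle flagged this way would still need human and legal review; string similarity alone cannot establish a likelihood of confusion.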

Hashtag hijacking:

Using a trade marked name or slogan as a hashtag without permission can be particularly problematic, especially if the hashtag is used in a way that could confuse consumers or dilute the brand’s identity. Trade marked terms should not be used in hashtags accompanying social media posts unless explicit permission has been obtained from the trade mark owner.

As a general rule, using a protected trade mark in a hashtag risks infringement if it implies sponsorship, association, or endorsement by the trade mark owner. However, if the hashtag simply promotes the user’s own goods or services, for example by indicating compatibility or genuine origin, it may be considered permissible.

Advertising and trade mark use:

Incorporating trade marks into social media advertising initiatives is very important for businesses. However, under consumer law and the domestic legislation applicable in each case, trade mark owners must ensure that advertisements adhere to trade mark laws, steering clear of any practices that could be deemed misleading or deceptive.

Claims made in social media ads must be substantiated, and the use of trade marks must not create a false impression about the product or service. Misleading use of trade marks can result in regulatory action, fines, and damage to a business’ reputation. Moreover, social media ads employing registered trade marks must not suggest affiliations or product characteristics that do not exist. For instance, implying that a product has certain qualities or is endorsed by a trade mark owner when it is not can be considered deceptive.

Misleading influencer partnerships:

Content creators, such as popular bloggers, online streamers, celebrities, and social media personalities, have the power to influence customers’ buying behaviour. Involving influencers in promoting products or services might seem like a good idea for businesses looking to expand their audience. Nevertheless, these collaborations should not involve making false claims about the benefits and effectiveness of what is being promoted.

In addition to the potential for misleading claims, influencers’ promotional content on social media can amount to trade mark infringement. When influencers use logos, brand names, or other trade marked elements without proper authorisation, they may inadvertently or deliberately create confusion about the source or sponsorship of the products or services being promoted. Such unauthorised uses, if left unmonitored, can mislead consumers into believing that an official partnership exists between the influencer and the trade mark owner when it does not. These actions can dilute the brand’s identity and value, potentially resulting in legal disputes and damaging the reputation of both the influencer and the businesses involved.

Confusing similarity:

Using a sign that closely resembles an existing registered trade mark in a way that could confuse consumers could constitute infringement. Such confusion can arise from similar logos, names, or products/services offered under those trade marks, potentially leading consumers to mistake one for the other.

Consider, for instance, a scenario where a tech start-up called ‘AppLinx’ creates a logo closely resembling Apple’s iconic bitten apple and uses a name like ‘iLinx’ to promote its mobile app development services on social media. Users browsing their feed might mistake ‘iLinx’ for an Apple-affiliated service, potentially leading to trade mark infringement issues and confusion among consumers about the origin of the app development services.

Domain name infringement:

It should be remembered that trade marks are intellectual property rights protecting brands and their associated products or services, whereas domain names are addresses used to access websites on the internet.

Domain name infringement can occur by registering a domain name that is deemed to be identical or confusingly similar to another party’s trade marked name or brand, known as ‘cybersquatting’. Such similarity can lead to confusion among consumers, potentially diverting traffic away from the rightful owner’s website or causing harm to their reputation.

Take the example of a reputable company, ‘XYZ Clothing,’ which owns the trade mark and domain name ‘XYZClothing.com.’ If another party registers the domain ‘XYZClothing.net’ and uses it to sell counterfeit goods, customers searching for ‘XYZ Clothing’ might stumble upon the ‘.net’ website, purchase lower quality products, and have a negative experience. This confusion and association of poor quality can damage XYZ Clothing’s reputation and lead to a loss of trust among consumers.
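Watch services often approach this defensively, by generating likely typo variants of a mark and checking whether any have been registered as domains. The sketch below is a minimal illustration of that idea (the brand name and TLD list are hypothetical, and real services use far richer heuristics):

```python
import string

def typo_variants(mark: str) -> set:
    """Generate simple typosquatting candidates for a mark:
    single-character deletions, adjacent swaps and substitutions."""
    variants = set()
    for i in range(len(mark)):
        variants.add(mark[:i] + mark[i + 1:])          # deletion
        if i < len(mark) - 1:                          # adjacent swap
            variants.add(mark[:i] + mark[i + 1] + mark[i] + mark[i + 2:])
        for ch in string.ascii_lowercase:              # substitution
            if ch != mark[i]:
                variants.add(mark[:i] + ch + mark[i + 1:])
    variants.discard(mark)
    return variants

# Hypothetical usage: expand the variants across a couple of common TLDs
suspect_domains = {v + tld for v in typo_variants("xyzclothing")
                   for tld in (".com", ".net")}
print("xyzcloting.com" in suspect_domains)   # True: the 'h' has been dropped
```

Exact-name registrations under other TLDs, as in the ‘.net’ scenario above, are simpler still to monitor, since the mark itself can be checked against each TLD directly; either way, a flagged domain is only a starting point for a UDRP complaint or other legal action.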

In the 2003 case of Harrods Limited v. Pete Lormer (WIPO Case No. D2003-0504), an American individual named Pete Lormer registered the domain name ‘www.harods.com’, which closely resembled the HARRODS registered trade mark. Users entering ‘www.harods.com’ were redirected to ‘www.expedia.com’, suggesting a false sense of origin or sponsorship for any associated products, goods, or services. As a result, the Panel of the WIPO Arbitration and Mediation Center concluded that Lormer had registered and used the domain in bad faith, intending to exploit the HARRODS trade mark for commercial gain. Consequently, the Panel ordered the transfer of ‘harods.com’ to Harrods Limited.

Parody accounts:

Another issue that deserves attention is parody accounts mimicking the style of a well-known brand without clearly labelling themselves as satire. These are social media profiles or online personas that use the likeness of a person, group, or organisation to discuss, satirise, or share information about that entity. Although such accounts may be lawful when the account name and profile clearly indicate no affiliation with the original entity, using terms like ‘parody’, ‘fake’ or ‘fan’, they can cross legal boundaries when they engage in trade mark infringement, impersonation, or deceptive practices that mislead users.

One notable example involved a spoof Twitter account, @UKJCP, which operated under the name ‘UKJobCentrePlus not’ and had adapted the job placement body’s official logo. The account mocked Jobcentre Plus and welfare policies, attracting the fury of the then Conservative government. The Department for Work and Pensions (DWP) complained to Twitter that the account had been set up ‘with a malicious intent’ to undermine the work of Jobcentre Plus, but Twitter (now X), after initially suspending the account, eventually restored it because its policy at the time allowed parody accounts as long as they were clearly labelled as such.

Ensuring trade mark integrity amidst digital challenges

In conclusion, the digital landscape offers both vast opportunities and significant challenges for trade mark protection. Navigating trade mark protection in today’s digital age requires a deep understanding of the evolving dynamics of social media and e-commerce platforms. As brands increasingly engage with their audience on platforms like Facebook and Instagram, the risk of trade mark infringement and misleading advertising also increases. Therefore, businesses must implement rigorous measures to protect their trade marks. These measures may include monitoring for unauthorised use, ensuring transparent endorsement disclosures, and working with social media platforms to enforce policies effectively.

To preserve consumer trust and protect brand integrity in today’s competitive digital marketplace, businesses must prioritise transparency and implement robust trade mark protection strategies. A proactive approach to trade mark protection empowers brands, ensuring their sustained success and reputation. At the same time, the active involvement of social media platforms in developing and enforcing trade mark protection policies is essential in enhancing enforcement against trade mark infringement.

About the authors

Laia Montserrat Chávez González is currently in her final semester of a double degree in Law and Economics at Tec de Monterrey in Mexico, while in parallel pursuing an LLM in International Commercial and Business Law at the University of Essex. Laia has advised national and international clients on trade mark registrations and feasibility studies, and managed administrative procedures with the Mexican Institute of Industrial Property. Passionate about protecting creativity and innovation, she also oversaw intellectual property aspects in transactions, ensuring compliance across different legal systems and facilitating trade mark rights transfers.

Renata Alejandra Medina Sánchez is a lawyer who graduated from the Pontifical Catholic University of Ecuador, with a Senior Specialization in Business Law from the Universidad Andina Simón Bolívar, and a Master’s in Contemporary Contracting from the Universidad Externado de Colombia. She is currently pursuing an LLM in Corporate Social Responsibility and Business at the University of Essex. She has extensive experience in corporate, contractual, labour, tax, and commercial law. Throughout her career, she has collaborated with both domestic and foreign companies, assisting them from their establishment to the expansion of their operations. Her expertise encompasses involvement in merger and acquisition processes, as well as the drafting and negotiation of contracts.

How Harry Styles’ stalking incident highlights the boundaries of celebrity worship

Image via Wikimedia Commons

A later version of this article was published by The Conversation on 2 May 2024 and can be read here.

By Alexandros Antoniou, Essex Law School

In our digitally interconnected world, the allure of Hollywood and music sensations captivates millions, drawing admirers into the intimate orbit of their idols. Falling under the spell of a celebrity crush is a common aspect of adolescent development, but today’s heightened accessibility can foster a dangerous sense of entitlement among fans.

The recent conviction of Harry Styles’ stalker, who inundated him with 8,000 cards in under a month, vividly illustrates the alarming consequences of overstepping boundaries in the perceived intimacy between fans and celebrities. Notably, journalist Emily Maitlis, The Crown actress Claire Foy, and TV presenter Jeremy Vine have all experienced similar stalking incidents.

A range of audience engagement

We connect to media figures in different ways, from deeply empathising with a cherished character’s experiences to feeling a sense of closeness with TV hosts who become a familiar presence in our lives.

Sometimes we immerse ourselves in a character’s narrative to the extent that their joys and sorrows become intimately felt experiences (e.g., a deep sense of sadness when a beloved TV character undergoes a loss), regardless of their disparate backgrounds or life journeys.

Repeated exposure and personal disclosures from media personalities can create a sense of closeness in viewers, despite the lack of direct interaction, as when a TV host becomes a familiar presence in our daily lives. These connections, known as parasocial relationships, thrive on perceived intimacy but lack reciprocity.

Fandom, marked by intense admiration, elevates parasocial relationships to pedestals and becomes deeply ingrained in one’s identity. This devotion can extend beyond individual characters to entire shows or franchises, manifesting in activities like collecting merchandise and engaging with online fan communities.

Our ties to fictional characters, the actors embodying them, and influential media figures vary but collectively form a spectrum of audience involvement. This intricate web of seemingly harmless bonds can morph into toxic obsessions, as seen in the case of Emily Maitlis’ stalker, whose “unrequited love” for the former news anchor led to repeated breaches of a restraining order.

However, it is not merely a gradual escalation of these connections; rather, individuals (possibly battling mental health challenges) may harbour various motivations ranging from vengeance, retribution, and loneliness to resentment, a yearning for reconciliation, or a quest for control. They may hold delusions, such as “erotomania,” believing someone loves them and will eventually reciprocate. Their behaviour might stem from an obsessive fixation on a specific cause or issue.

In the complex realm of fandom culture, the law starts by recognising that beneath the celebrity veneer of flawless posts and red-carpet appearances lies a real person with vulnerabilities. Like everyone, they too deserve a zone of privacy which comprises different layers of protection.

The sanctum core

Picture your life as a mansion, with each room symbolising different facets: thoughts, emotions and personal endeavours. Encircling this mansion is a protective perimeter, a privacy zone shielding specific aspects of your life from unwanted intrusion, be it by strangers, acquaintances, or the government. Maintaining the integrity of these restricted areas is left to a mixed legal environment encompassing civil remedies and criminal offences, including racially or religiously aggravated variants.

Secretly monitoring someone’s activities or lingering around their home without valid cause gravely endangers this zone. Claire Foy’s stalker, who had become “infatuated” with the actress, received a stalking protection order after appearing uninvited at her doorstep, leaving her “scared” of her doorbell ringing and feeling “helpless” in her own home. Sending unsolicited “gifts” is also associated with stalking, as demonstrated by Styles’ relentless pursuer who sent countless unsettling letters and hand-delivered two to the singer’s address, causing “serious alarm or distress”.

An intimate ecosystem

Importantly, the mansion’s private enclave embodies more than an inner sanctuary where people can live autonomously while shutting out the external world. Our private sphere also safeguards our personal growth and ability to nurture relationships, constituting a “private social life.”

When stalking rises to the level of inducing fear of violence or has a “substantial adverse effect” on someone’s regular activities, e.g., forcing a celebrity to make significant changes to their lifestyle, the law steps in to protect victims, including innocent bystanders who might experience direct intrusion themselves.

For example, Emily Maitlis’ stalker showed “breath-taking persistence” in contacting his victim and her mother, while Foy’s stalker had emailed the actress’ sister and texted her ex-boyfriend. Such conduct warrants legal intervention because it can severely impair someone’s ability to freely establish normal social networks and ultimately increases isolation, amplifying the disruptive impact on their support systems.

Advancements in communications technology have driven the surge in “cyberstalking”. For example, presenter Jeremy Vine’s stalker “weaponised the internet”, sending relentless emails identifying his home address and instilling fear for his family’s safety. Such digital variations of traditional stalking might also be pursued through communications offences, including the newly enacted “threatening communications” offence.

FOUR indicators

Behaviours may vary but they frequently exhibit a consistent pattern of Fixated, Obsessive, Unwanted and Repeated (FOUR) actions, violating not only a person’s inner circle privacy zone but also the outer sphere of their private social life.

While rooted in natural admiration for talent and charisma, celebrity worship can blur the line between harmless adoration and harmful obsession, particularly in an age dominated by social media that gives unprecedented access to our favourite stars. Legal boundaries delineate genuine appreciation from repetitive, oppressive conduct that jeopardises someone else’s well-being.

The Anatomy of Impact: A Conversation with Professor Lorna Woods

Photo by Joshua Hoehne on Unsplash

By Professor Carla Ferstman, Director of Impact, Essex Law School

As academics, we conduct research for all sorts of reasons. We seek to advance knowledge and innovation in the areas in which we specialise, and we try to make connections with research being done in other disciplines in order to enhance our understanding of, and contribute to addressing, cross-cutting, complex challenges.

Academic research is increasingly being applied outside of academia to foster external impacts in our communities and societies. Research-led teaching can also create opportunities for cutting-edge student learning.

The UK Research Excellence Framework values world-leading research that is rigorous, significant and original. It also encourages and rewards research that generates impact, which it understands as “an effect on, change, or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia” (REF2021).

Impactful research is particularly relevant and important for the discipline of law, where colleagues’ work can lead to changes in how justice is perceived and how access to justice can be better achieved. Academic research in law has led to and influenced the direction of law reform, and academic findings have also been applied authoritatively in court judgments. Legal research has also led to the development of new policies and regulatory frameworks in the UK and internationally.

Despite the importance many legal academics place on generating impact, the route to impact is not obvious. Achieving impactful academic research defies a one-size-fits-all formula, though certain key pointers are invaluable:

First, impactful research is generated by academics who produce excellent, groundbreaking research.

Second, academics should be mindful of who (e.g., community stakeholders, policy-makers, decision-makers) would benefit from knowing about the research and should develop a strategy to ensure they effectively disseminate their findings.

Third, academics seeking to generate impactful research should be actively engaging with those who can benefit from their research, adapting their approach based on stakeholder needs and circumstances.  

Learning from example

Academics can glean wisdom from exemplary models. And there is no better example than Professor Lorna Woods, whose research contributed significantly to the Online Safety Bill (now Online Safety Act 2023) and led to her being awarded an OBE for services to internet safety policy.

I sat down with Professor Woods to get a clearer understanding of her trajectory – how she got from A to B to C (or indeed, from B to A to F to C), to better appreciate the time her ideas took to percolate and the challenges she faced along the way.

I wanted to understand whether her research was picked up by government by happenstance, by careful, plodding planning, or some combination of the two. I also wanted to know whether there was any magic formula she could share for generating impactful research.

Lorna qualified as a solicitor and worked in the early 1990s for a City of London firm, where she was exposed to a variety of areas of law, including international trade, competition, and commercial law. She began to work with two of the partners on matters involving regulation, intellectual property, and media. She happened to be at the firm when many developments in the law occurred, such as the Broadcasting Act 1990, updates in data protection rules, and other changes resulting from growing public access to the internet.

This quickly developed into a specialism related to technology. “The work was really interesting. It wasn’t just the typical due diligence or deals management work that one often received in a corporate solicitor’s firm, there was a space to think and a space to have your say”.

Also, during this time, Lorna did some consulting work for the European Commission in Eastern European countries following the political changes in the early 1990s, focused on media freedom and public service broadcasting, which involved new thinking about the rights of the public audience that had not yet been theorised.

Lorna left the firm after about five years when, as often happens, she began to take on a more supervisory role, with some of the most interesting pieces of work being delegated to more junior colleagues. She pursued an LLM degree at the University of Edinburgh (legal theory and human rights, with a dissertation on federalism and the European Union) and began to apply for academic roles. She secured a position at Sheffield in 1994 and began teaching EU and public law.

The Eureka moment or more of a slow-burner?

Gradually Lorna’s research began to drift back to media law and data protection, incorporating areas she had been studying around human rights, public speech, surveillance, and the rights of journalists, but with her own take. She recalled that “A lot of people were talking about journalists’ rights, but I was focussed on the rights of the companies who were transmitting; an ‘essential facilities’ argument but approached from a rights perspective. I also started looking at these issues from the perspectives of EU law and the free movement of cultural standards [the rights of the audience] rather than simply as an issue of freedom of expression.”

Central to this was the idea that there were different actors in an information environment – the speakers and the audience, and something in the middle which had more to do with the platform and is not really seen or thought about. The question Lorna had was whether these entailed separate rights or were all part of a unified right to information.

In 2000, Lorna began collaborating with Professor Jackie Harrison at Sheffield on research into new media and media regulation, and it was here that she further conceptualised her thoughts on the rights of the audience, not only to have access to information, but to information that was reasonably reliable and, where possible, to a diversity and plurality of sources.

This also connected to her thinking about how to find information on the internet, who curates what we can find, and what responsibilities may be attached to that curation. The flip side was considering the nature of states’ positive obligations to provide a safe online environment. Lorna also began to explore issues around user-generated content.

In response to the growing awareness of how female politicians and activists were being targeted on Twitter (now X), and the notoriety of the abuse faced by Caroline Criado Perez and Walthamstow MP Stella Creasy, Lorna started looking at what controls were in place, and began to consider the gaps in regulation and how they could best be addressed.

At the time, she observed that politicians had embraced Twitter, amplifying their influence while also making them more accessible and exposed. The platform facilitated direct communication between everyone on the network, including unsavoury individuals who used it as a vehicle for abuse. This was fuelled by anonymous accounts, hashtags that allowed users to jump on the bandwagon, and seemingly little moderation at that stage. There were many instances of public-facing women receiving rape and death threats.

In consequence, there were several instances in which users were charged in the UK under section 127 of the Communications Act – a low-grade offence which criminalises the sending, via a “public electronic communications network”, of a message which is “grossly offensive or of an indecent, obscene or menacing character”. But it was never clear to Lorna that using the criminal law was the best solution to the problem.

The campaign for law reform begins to take shape

Around 2015, Lorna became aware that the then Labour MP Anna Turley was developing a private member’s bill: the Malicious Communications (Social Media) Bill. Someone whom Lorna had met in an unrelated capacity – “this is just really a feature of when you work in a certain area, you meet people linked to that area. And progressively, your army of contacts comes back to help” – William Perrin, managed to get her in the door to meet the MP.

Together, Lorna and William helped to draft the Bill. The goal was to give users better tools (user empowerment features and functionalities) so that they could filter and triage incoming content, at least as a starting point for improving the online environment. Their advice (which was taken on board) was not to remove platform immunity for third-party content; they recognised that the platform providers were offering an important service worth protecting.

Part of the rationale for this was the connection they saw between internet platform providers and telecoms providers: “If you were to hold a telecoms provider responsible for anything communicated on the service, they would become very cautious and ultimately it would shut down the service. So, there was a need for caution.” Ultimately the Bill did not progress – private members’ bills rarely do – but such bills serve to bring matters to the attention of the Government and can form part of a campaign for change.

Subsequently, the Government published a Green Paper on internet safety in 2017, in which significant concerns were raised. This was the era of Cambridge Analytica and misinformation, but there were also concerns about child pornography and online bullying, and – stemming from the tragic Molly Russell case – about algorithms prioritising harmful content to vulnerable users. The Green Paper seemed to revisit the recommendation to remove (or significantly restrict) platform immunity for third-party content, which Lorna and William did not think was the best approach, for the reasons already stated.

There was a need to conceive of the problem at the systems level, rather than merely focusing on isolated items of content. For example, the scale of the problem invariably was not about the individual offensive posts but that the content was quickly able to go viral without appropriate controls, aided by functions like the “like” button, and the availability of anonymous, disposable accounts.

Similarly, the recommender algorithm, which optimised certain posts for engagement, tended to privilege the most irrational, emotional posts, which were more likely to promote hatred or cause offence. Making small changes to these kinds of features and investing more in customer response could significantly improve online safety. Thus, according to Lorna, there was a certain recklessness in the product design that needed to be addressed – this was the genesis of the idea of a statutory duty of care.

Paws for thought: remembering Faith, Lorna’s beloved cat who ‘Zoom-bombed’ video calls during lockdown and contributed much to debates on online safety

The statutory duty of care

Lorna and William produced a series of blogs and papers outlining this position, and the need for such reforms was also underscored by Lorna during an oral evidence session at the House of Lords inquiry into the regulation of the internet. The Carnegie UK Trust stepped up to champion Lorna and William’s work, facilitating its progress.

The UK Department for Culture, Media and Sport (DCMS) invited Lorna to give a briefing, and it became clear that there was some confusion. The DCMS had been under the impression that the conditionality of the platform immunity amounted to a statutory duty of care. Consequently, part of what Lorna and William tried to explain was how their proposal was compatible with the principle of platform or intermediary immunity. The proposal was not seeking to impose liability on the platform for user content but instead focused on requiring platforms to ensure product design met their duty of care to users. These discussions with DCMS continued, and progressively intensified.

The White Paper which was ultimately released in April 2019 clearly articulated that “The government will establish a new statutory duty of care to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services,” and outlined what that duty of care would look like and how it would be regulated.  

Changes within the Tory leadership ultimately delayed progress. There were also concerns raised by some of those in the free speech lobby who saw parts of what was being proposed as censorship.  Lorna’s background in freedom of speech helped her respond to those concerns: “I was concerned that freedom of speech was being used as a slogan. When you look at any right and you look at it in isolation, you are then implicitly privileging it. And here, it was important not just to consider the rights of the ‘speaker’ but the rights of all the other users as well, some of whom are extremely vulnerable.” 

These points align with what the UN Special Rapporteur on Freedom of Opinion and Expression explained in her 2023 report on gendered disinformation, in which, citing Lorna’s submission, she notes that “Systemic regulation, which emphasizes ‘architecture over takedown’, allows for more proportionate responses and is likely to be better aligned with freedom of expression standards.”

Certainly, companies were lobbying in other directions and the Act reflects some corporate compromises, such as the need for the duty of care to be applied proportionately, to account for the different levels of resources of the regulated company. But there were powerful counter-arguments, and the NSPCC and other organisations were effective allies particularly on the need for clear duties of care in relation to child users. The Daily Telegraph also ran an important campaign on the legislation. The Government at one point sought to restrict the Act to concerns about children, so this became part of the campaign to maintain a focus also on harm to adults (unfortunately only limited protections were maintained). There are other parts of the Act which differ from what Lorna and William had proposed, such as dividing up the regulatory framework by reference to certain types of conduct. Inevitably there were compromises.

The Act as adopted envisages that the communications regulator Ofcom will produce guidance and codes which will explain what internet platforms must do in order to operate in the United Kingdom. There are ongoing consultations regarding these texts. Once the guidance and codes are in place, companies will be given a period (three months) to align their practice to comply with the requirements. Thereafter, the duties of care will become binding.

Some of the companies appear to be arguing that a duty of care is too vague a standard; however, this is hard to accept, given that it is a recognised legal standard. The goal for Lorna and others is therefore to ensure that the duty of care standard is made operational in such a way that it provides clear and adequate protections; it should be more than a ‘tick the box’ exercise.

I asked Lorna how this legislation would tackle the activities of companies operating outside of the UK, but with impacts in the UK. She explained that parts of the Act have extraterritorial effect, to the extent that company activities are directed at or have impacts in the UK. Some companies have introduced policies for different geographical regions to address the requirements of national legislation, so this is a possibility for multinational internet platforms accessible to UK users.  

I also discussed with Lorna whether she believed individuals like Molly Russell would be more effectively safeguarded now that the Online Safety Act is in force. She explained that Molly would not be better off today, because the guidance and codes are not yet in place. “Maybe in a year’s time, she would probably be better protected, as a child. I think an 18-year-old Molly would be sadly let down by the regime, which should be more robust.”

Given the clear synergies with her work on the Act, Lorna is also progressing with work on online gender-based violence, as well as some work on gendered misinformation, incel culture and extremism. As she looks deeper into these critical areas, it becomes evident that her ongoing endeavours reveal new challenges and fresh avenues for advocacy and change.

New communications offences enacted by the Online Safety Act 2023

Photo by Ravi Sharma on Unsplash

By Dr. Alexandros Antoniou, Essex Law School

The Online Safety Act 2023 (OSA) introduced a range of measures intended to improve online safety in the UK, including duties on internet platforms about having systems and processes in place to manage illegal and harmful content on their sites. On 31 January 2024, Part 10 of the Act came into effect, introducing a series of new criminal offences which represent a significant leap forward in tackling complex challenges surrounding online communications safety.

Section 179 of the OSA establishes the criminal offence of sending false communications and seeks to target, among others, internet trolls. It is now deemed an offence if an individual (a) sends a message containing knowingly false information; (b) intends, at the time of sending, to cause non-trivial psychological or physical harm to a likely audience; and (c) lacks a reasonable excuse for sending the message. Recognised news publishers and broadcasters are exempt. The offence does not apply to public screenings of cinema films either. It can be committed by individuals outside the UK if they are habitually resident in England, Wales, or Northern Ireland. Penalties include imprisonment for up to six months, a fine, or both. It is hoped the new offence will help clamp down on disinformation and election interference online.

Section 181 establishes the criminal offence of sending threatening communications. This is committed when an individual sends a message containing a threat of death, serious harm (e.g. bodily injury, rape, assault by penetration), or serious financial loss, with the intent to instil fear in the recipient that the threat will be carried out (whether by the sender or someone else). In cases of threats involving financial loss, a defence is available if the threat was used to support a reasonable demand, and the sender reasonably believed it was an appropriate way to reinforce that demand. This offence applies to individuals residing in England, Wales, or Northern Ireland, even if the sender is located outside the UK. Penalties include up to five years of imprisonment, a fine, or both. In March 2024, Essex law enforcement achieved a significant milestone by obtaining one of the earliest convictions under the new OSA, resulting in an eight-month jail sentence for Karn Statham. Statham harassed a woman by sending threatening messages and making repeated visits to her address after being instructed to cease contact.

A new criminal offence under section 183, dubbed “Zach’s law”, aims to protect people from “epilepsy trolling”. The campaign against such conduct began when eight-year-old Zach, who has epilepsy, was raising funds for the Epilepsy Society. Trolls inundated the Society’s profile with images and GIFs meant to induce seizures in people with epilepsy. While Zach was unharmed, others with the condition reported seizures after engaging with the fundraiser online. The Act creates the offence of deliberately sending or showing flashing images to individuals with epilepsy with the intent to cause harm, defined as inducing a seizure, alarm, or distress. Particular conditions (specified in the Act) must be met before a conviction is secured, both in respect of sending and showing flashing images electronically. Recognised news publishers, broadcasters, public screenings of cinema films as well as healthcare professionals cannot be guilty of this offence (which can similarly be committed by individuals outside the UK if they are habitually resident in England, Wales, or Northern Ireland). Penalties include imprisonment for up to five years, a fine, or both.

Moreover, section 184 outlaws encouraging or assisting serious self-harm. To be guilty of this offence, an individual must perform an act intended to encourage or assist serious self-harm in another person, whether through direct communication, publication or sending (or giving) items with stored electronic data. Serious self-harm encompasses actions leading to grievous bodily harm, including acts of omission such as encouraging someone to neglect their health regimen. The identity of the person harmed need not be known to the offender. The offence can occur regardless of whether self-harm is carried out and it is irrelevant who created the content in question (it is the sending that matters). The offence is punishable by imprisonment for up to five years, a fine, or both, and likewise, it applies to individuals habitually resident in England, Wales, or Northern Ireland, even if they are outside the UK.

Cyber-flashing on dating apps, AirDrop and other platforms will also result in perpetrators facing up to two years in prison. Section 187 of the Act introduces a new offence under the Sexual Offences Act 2003 pertaining to the sending of photographs or films of a person’s genitals to another individual. A person (A) is deemed to commit the offence if they intentionally send or provide a photo or video of another person’s genitals to another individual (B) under the following conditions: either A intends for B to view the genitals and experience alarm, distress, or humiliation; or A sends or provides such material with the aim of obtaining sexual gratification and is reckless as to whether B will experience alarm, distress, or humiliation. “Sending” covers sending through any means, including electronic methods, showing it to another person, or placing it for someone to find. A conviction for this offence could also lead to inclusion on the sex offenders’ register. In February 2024, an Essex Police team secured the UK’s first cyber-flashing conviction, with Nicholas Hawkes pleading guilty to sending explicit images via WhatsApp to cause distress. On 19 March 2024, Hawkes was sentenced to 66 weeks in prison. He was also made subject to a restraining order for 10 years and a Sexual Harm Prevention Order for 15 years.

Finally, the OSA repeals the legislation first introduced to tackle ‘revenge porn’ offences (sections 33-35 of the Criminal Justice and Courts Act 2015) and introduces a set of intimate image sharing offences. Specifically, section 188 of the OSA introduces a new base offence of sharing intimate images without consent, carrying a penalty of imprisonment for up to six months. This applies when an individual intentionally shares an image portraying another person in an intimate context without their consent and without a reasonable belief in consent. Two more serious offences are established on top of that, both reflecting the offender’s higher culpability and carrying greater penalties: namely (a) intentionally causing alarm, distress, or humiliation to the person in the image; and (b) seeking sexual gratification from the act (these are outlined in sections 66B(2) and (3) of the Sexual Offences Act 2003). Threatening to share an intimate image of a person has also been made an offence where the perpetrator either intends to cause fear that the threat will be carried out or acts recklessly in doing so (this is found under section 66B(4) of the aforementioned 2003 Act). The new offences also fall under the sexual offender notification requirements. These new intimate image offences are also designed to tackle “deepfakes” and “down-blousing” (i.e. capturing images typically of a person’s chest area, from a downward angle, often without their knowledge or consent). They also come with various exemptions (outlined under section 66C of the Sexual Offences Act 2003), e.g. where the photograph or film involves a child and is of a kind normally shared among family and friends.

While there is some overlap with existing offences, the new offences consolidate previous ones or address gaps. For example, the intimate image sharing offence widens the scope of the relevant photographs or films from “private sexual” to “intimate”, and makes it easier to prosecute those caught sharing such content online without the other person’s consent, as it removes the requirement that harm to the subject of the photograph or film be intended. The updated guidance of the Crown Prosecution Service aims to delineate the appropriate charge for each circumstance. The introduction of the new offences is anticipated to fortify protections against online misconduct.


This article was first published on the IRIS Merlin database and is reproduced here with permission and thanks.

Navigating freezones in the influencerdom: a shadowlands guide

Photo by Ronald Cuyan on Unsplash

By Dr. Alexandros Antoniou, Essex Law School

Influencer marketing has emerged as a formidable force in the realm of advertising, wielding substantial power to sway consumer behaviour and shape brand perceptions. Leveraging the credibility and reach of social media personalities, brands can effectively tap into niche audiences and foster authentic connections.

Despite its undeniable impact, there remains a notable lack of comprehensive research and regulatory oversight surrounding influencer marketing practices. As the landscape continues to evolve rapidly, it becomes increasingly imperative for regulators to delve deeper into this field in order to safeguard followers’ interests and maintain the integrity of digital advertising ecosystems in which influencers operate.

My new research looks at the rapidly evolving landscape of influencer marketing and its profound effects on the dynamics between social media users, advertisers, and brands. In the article, I demonstrate that influencers have transcended the dichotomy of self-publishers vs traditional advertisers, shaping distinct career trajectories.

With the burgeoning influencer industry in mind, I critically examine the regulatory landscape, particularly the responsiveness of the Advertising Standards Authority (ASA) and the Competition and Markets Authority (CMA) to influencers’ professionalisation.

Despite the industry’s growth, regulatory gaps persist, leaving consumers vulnerable to lightly-overseen influencers. I caution that regulators rely on antiquated tools, allowing newcomers in the industry to fly beneath their radar.

For instance, the established advertising rule requiring that ads be clearly identified as such predominantly applies to influencers who have forged brand partnerships. However, I argue that early-career influencers, who may not monetise their content, still wield significant influence. They have a remarkable knack for cultivating genuine connections that bestow hidden promotional content with an unmatched aura of trustworthiness.

I conclude that, from a regulatory standpoint, influencers’ increasing professionalisation is not being recognised. I advocate for a transformative shift in regulatory perspective to encompass influencers throughout their career journey, challenging the prevailing notion that only high-reach influencers warrant scrutiny.

Therefore, I emphasise the need for a recalibrated regulatory threshold that accounts for emerging influencers, endorsing a more comprehensive definition and a holistic approach that recognises the multifaceted nature of influencer marketing practices.

My article, published in the Journal of Computer, Media and Telecommunications Law (Vol. 29, Issue 1, pp. 8-21), urges regulators to adapt to the nuanced and evolving nature of influencer marketing to ensure more robust oversight and integrity in this emerging profession.

Brianna Ghey’s Murder: Unpacking Transphobia, Offender Anonymity, and the Impact of Sentencing Remarks

By Dr. Dimitris Akrivos, University of Surrey, and Dr. Alexandros Antoniou, University of Essex

This blog post first appeared on The International Forum for Responsible Media Blog on February 27th 2024.

Photo via Shutterstock

The death of 16-year-old Brianna Ghey at Culcheth Linear Park in February 2023 sent shockwaves across the United Kingdom. On 20 December 2023, Scarlett Jenkinson and Eddie Ratcliffe were found guilty of Brianna’s murder, subsequently receiving life imprisonment sentences on 2 February 2024.

From the brutality of the crime to the debate over whether the perpetrators’ names should have been published and the speculation as to whether their acts had been influenced by violent media, this case is reminiscent of James Bulger’s murder over three decades ago. A notable difference, however, is that the victim in this case was a transgender girl.

Brianna’s murder against the backdrop of the trans rights debate

Official figures reveal a concerning surge in police-recorded transphobic hate crimes in England and Wales in recent years (11% up from the year before in 2022/23 and a staggering 186% rise over the last five years).  The latest Home Office report acknowledges that comments made by politicians and incendiary media discussions on trans issues might have contributed to this trend. In the current socio-political climate, where the polarisation between trans and women’s rights groups over gender self-identification can reach ‘toxic’ levels, there is a serious risk that victims like Brianna Ghey will – as the domestic abuse commissioner Nicole Jacobs warned – be ‘denied their dignity’.

Recognising the role transphobia has played in this violent crime is vital to tackling that risk. Yet, The Times were quick to ‘deadname’ Brianna, i.e. report the news of her murder using the victim’s pre-transition (male) name, triggering a strong backlash from trans advocates. Similarly, BBC News and Sky News also faced criticism for initially failing to mention that the victim was trans. Meanwhile, Fair Play for Women, a gender-critical campaign group which views sex as immutable, argued that the victim’s transgender identity was not relevant to stories about her murder and should have been omitted from them. Notably, Cheshire police did not consider the murder to have been motivated by hatred against Brianna’s transgender identity. DCS Mike Evans explained that Jenkinson and Ratcliffe had previously discussed killing other children, suggesting that, had they not been able to kill Brianna, they would have found another victim.

Why did Brianna’s murderers not remain anonymous?

Due to the defendants’ age, restrictions were in place throughout the trial to prevent the publication of any information likely to reveal the identities of the two perpetrators as defendants in these proceedings. However, some controversy arose when the decision was made to publicly name the two teenagers at their sentencing. Mrs Justice Yip took the unusual step of revoking the anonymity orders shielding the assailants’ identities, following an application by press representatives.

As there has been some misunderstanding around this issue, it is worth explaining how the anonymity orders worked in Brianna’s case. It will be recalled that the two perpetrators were tried before the Manchester Crown Court, which is an adult criminal court – not a youth court (of note, a young person charged with murder cannot be tried or sentenced by a youth court because of the seriousness of the charge). While there is no automatic ban on identifying individuals under 18 as being concerned in the proceedings of adult criminal courts, section 45 of the Youth Justice and Criminal Evidence Act 1999 empowers criminal courts to grant anonymity to a juvenile defendant, victim or witness in adult criminal proceedings while they remain under the age of 18. This power is not available to youth courts. The intention of Parliament in enacting this provision was to widen the scope of protection available to under-18s.

Section 45 allows an adult criminal court to impose a discretionary reporting restriction. If the court so wishes, it can choose to impose no restrictions at all. The law draws, therefore, a distinction between young people appearing in youth courts, who are automatically entitled to anonymity, and those appearing in adult criminal courts, who must seek a discretionary reporting restriction.

This is critical. It means that in a youth court, there must be a good reason for lifting the anonymity order which applies by default, whereas under section 45 of the 1999 Act, there must be a good reason for imposing – or continuing with the imposition of – the anonymity order. So, in the case of section 45, there is a strong presumption in favour of open justice, placing the burden of justifying reporting restrictions on the party seeking to derogate from this fundamental principle.

The defendants in Brianna Ghey’s case, both 16 at the time of their conviction, would lose the anonymity protection upon reaching adulthood in 2025 by operation of the law. In the meantime, however, a court may consider lifting or relaxing restrictions in two circumstances: either when the court is satisfied that doing so is ‘necessary in the interests of justice’ (section 45(4)); or when it is satisfied that the reporting restriction unduly limits the coverage of the proceedings and it is ‘in the public interest’ to remove or modify the restriction (section 45(5)). A list of factors to be considered in an assessment of where the public interest lies in such situations is provided in section 52 of the Act.

No judge takes such decisions lightly. As the Court of Appeal has previously emphasised, judges are tasked with meticulously weighing the competing public interest factors at play in the particular circumstances before them. So, neither the open justice principle nor a young person’s best interests automatically dictate the conclusion in a given case. Pre-conviction and during the trial, a defendant’s welfare is likely to take precedence over the public interest in disclosure. However, post-conviction and sentencing, factors such as the offenders’ age and the severity of the crime acquire particular relevance in determining whether publication is warranted.

As Mrs Justice Yip observed in Brianna’s case, ‘the shock generated by [her] murder and the circumstances of it has spread well beyond the local community, across the nation and indeed internationally. The public will naturally wish to know the identities of the young people responsible as they seek to understand how children could do something so dreadful. Continuing restrictions inhibits full and informed debate and restricts the full reporting’ of an ‘exceptional’ case.

But the lifting of the discretionary reporting restrictions under section 45 was driven not only by the sustained public interest in knowing the identity of Brianna’s murderers, but also by the likelihood of continued media attention regardless of the timing of disclosure and by the fact that the defendants’ custody and rehabilitation process would extend into adulthood. While acknowledging the distress to the defendants’ families, Mrs Justice Yip underlined that the powers under section 45 were not designed to protect convicted defendants’ family members, and the risk of harassment to those families was deemed likely regardless of the timing of identification. It was the combination of all these considerations that favoured publication.

Sentencing in Brianna’s murder as a catalyst for confronting transphobia

Brianna’s murderers were named the day they were sentenced for her murder. Even though Cheshire police had dismissed transphobia as a motivating factor, Mrs Justice Yip expressly recognised in her sentencing remarks that the crime had been, at least partly, driven by hostility towards Brianna’s trans identity. Distinguishing between the young offenders’ motivations, the judge determined that Jenkinson was primarily seeking to act out her ‘sadistic’ fantasies and had a ‘deep desire to kill’ while Ratcliffe was, in part, driven by transphobic sentiments. This hostility towards trans people had, according to the judge, been ‘undoubtedly displayed’ in the dehumanising language Ratcliffe used in the WhatsApp messages he had sent to Jenkinson, in which he described Brianna as a ‘femboy thing’ or ‘it’, revealing that he wanted to ‘see if it will scream like a man or a girl’.

Such messages make for harrowing reading, and it is easy, even convenient, for our society to brush off the transphobia reflected in them as merely the hateful words of one ‘bad apple’. The truth is, however, that Brianna Ghey’s murder has shed light on a harsh reality: abuse is often a distressing feature of vulnerable trans individuals’ lives, even if it does not always escalate to extreme violence. The Conservative Government’s and the UK mainstream media’s trans-othering rhetoric has been repeatedly criticised by several international human rights organisations. Indicatively, the Council of Europe’s Commissioner for Human Rights, Dunja Mijatović, warned of the risks deriving from an ‘increasingly toxic’ anti-trans political and media discourse built upon ‘deeply discriminatory stereotypes […] based on ideas of predatory determinism.’ This ‘culture war’ against trans people has also been cited by the International Lesbian, Gay, Bisexual, Trans and Intersex Association as one of the reasons behind the UK’s continuous drop in its annual rankings for LGBT rights across Europe.

During Prime Minister’s Questions on 7 February 2024, Rishi Sunak faced a backlash after remarking on Labour leader Keir Starmer’s purported difficulty in ‘defining a woman’ while Brianna’s mother was in the public gallery. Trans allies, including Brianna’s father Peter Spooner, expressed ‘shock’ and ‘disgust’ at the PM’s ‘degrading comments’, calling for an apology which Sunak has refused to offer. Amid the increasing tensions between the two main political parties, it is vital that trans people’s lives are not reduced to a bargaining chip in their bid to win the upcoming general election. Despite the tragic circumstances surrounding Brianna’s murder, her story has the potential to catalyse a wider and more constructive dialogue on the consequences of ‘othering’ an already marginalised community. There are undoubtedly valuable lessons to be gleaned from this landmark case. The pertinent question remains: are our leaders prepared to heed them?

Dr. Dimitris Akrivos, University of Surrey, d.akrivos@surrey.ac.uk; Dr. Alexandros Antoniou, University of Essex, a.antoniou@essex.ac.uk

Essex Law School Expert Praised in House of Lords for Work on Online Safety Legislation

Photo by Marcin Nowak on Unsplash

Essex legal expert Lorna Woods has earned special recognition in the House of Lords thanks to her research and work supporting the landmark Online Safety Bill. The Bill successfully passed through Parliament and is now enshrined in law, having received Royal Assent on Wednesday 26 October 2023. The Act requires social media companies to keep the internet safe for children and to give adults more choice over what they see online.

Professor Woods helped shape the Bill after famously writing some of its founding principles on the back of a sandwich packet with William Perrin, of the charity Carnegie UK, several years ago.

Professor Woods has continued to work with Carnegie UK in the years since, providing expert advice to backbenchers and members of the House of Lords.

She was personally thanked by Lord Stevenson for her work on the Bill following the final debate in the Lords.

Lord Clement-Jones added: “I pay my own tribute to Carnegie UK, especially Will Perrin, Maeve Walsh and Professor Lorna Woods, for having the vision five years ago as to what was possible around the construction of a duty of care and for being by our side throughout the creation of this bill.”

Professor Woods has become a high-profile commentator on the Bill throughout its passage through Parliament, and recently recounted the “surreal moment” it was approved by the Lords in an interview with BBC Online.

In a separate interview with Wired, Professor Woods responded to criticisms of the bill by insisting it would help protect the human rights of children being exploited and abused online.

She was also quoted in the New York Times’ coverage of the Bill and has appeared on BBC Radio Five Live.

Professor Woods said: “The Bill is significant as it marks a move from self-regulation – where service providers decide what is safe design and whether to enforce their community standards – to regulation under which services are accountable for those choices.”


This story was first published on the University of Essex’s news webpages and is reproduced on the ELR Blog with permission and thanks. The story was edited to reflect the fact that the Bill received Royal Assent.

The Online Safety Bill: Where Are We Now and Will It Succeed?

Image via Shutterstock

The House of Lords is currently debating at Committee Stage the Online Safety Bill, a landmark piece of legislation which introduces a new set of internet laws to protect children and adults from online harms.

The Bill will establish a regulatory framework for certain online services. These include user-to-user services, such as Instagram, Twitter and Facebook, and search services, such as Google.

The UK government’s stated aim in introducing the Bill is “to make Britain the best place in the world to set up and run a digital business, while simultaneously ensuring that Britain is the safest place in the world to be online”.

The Bill will place duties of care on both regulated user-to-user service providers and regulated search service providers. The regulated service providers will have duties relating to, among other things: (a) illegal content; (b) protecting children; (c) user empowerment; (d) content of democratic importance, news publisher content and journalistic content; (e) freedom of expression and privacy; and (f) fraudulent advertising.

The Bill also does two other distinct but interconnected things. It introduces age-verification requirements in relation to pornography providers (which are not user-to-user); as well as new criminal offences, e.g., encouraging self-harm and epilepsy trolling.

This makes it a long, wide-ranging and complex Bill.

Moreover, the Bill will place more responsibility on technology giants to keep their users safe. It will give Ofcom, the UK’s communications regulator, the power to levy fines against non-compliant providers and will make senior managers liable to imprisonment for failing to comply with a direction to provide Ofcom with information.

But what impact is the Bill expected to have? And what concerns are there about the implementation of this new regime?

Prof. Lorna Woods (Professor of Internet Law, University of Essex), who devised the systems-based approach to online regulation that has been adopted by the Government and whose work is widely regarded as laying the groundwork for the UK’s Online Safety Bill, was recently interviewed on this new regulatory approach.

Photo by Austin Distel via Unsplash

On 11 May 2023, Prof. Woods stepped inside BBC Radio 4’s Briefing Room to be interviewed by David Aaronovitch. She talked about what is actually in the Bill, how the new internet laws are intended to work and what potential weaknesses remain. The programme can be accessed here.

Prof. Woods also joined Conan D’Arcy of the Global Counsel tech policy team to talk about UK tech regulation, discuss recent criticisms of the Online Safety Bill and consider the regulation of generative AI tools like ChatGPT. You can listen to the podcast here (published on 17 May 2023).

New Standards Code launched by press regulator IMPRESS

Photo by Jon Tyson on Unsplash

By Alexandros Antoniou, Essex Law School

On 16 February 2023, the press regulator IMPRESS launched its new Standards Code, with key changes including guidance on AI and emerging technologies, stricter measures on tackling misinformation, stronger safeguarding guidelines, and a lower discrimination threshold.

Background

IMPRESS is the only British press regulator to have sought formal approval from the Press Recognition Panel (PRP). The Panel was established in the aftermath of the phone-hacking scandal to ensure that any future press regulator meets certain standards in compliance with the Leveson report recommendations. IMPRESS is distinct from the Independent Press Standards Organisation (IPSO), Britain’s other press regulator which enforces the Editors’ Code of Practice but does not comply with the majority of the Leveson report’s independence requirements. IPSO regulates some of the more established UK press (e.g., the Mail newspapers, the News UK titles and their respective websites), whereas publishers regulated by IMPRESS tend to be newer and more digitally focused (e.g., Bellingcat, Gal-dem and The Canary). IMPRESS is viewed by some media campaigners (e.g., Hacked Off) as “the most popular” complaints-handling body in the country. Its membership has risen from just 26 publishers in 2017 to 113 today.

The IMPRESS Code was first published in 2017 with the aim of guiding media professionals and protecting the public from unethical news-gathering activity. It applies to all forms of news delivery, including print publications, news websites and social media, and to any individual or organisation gathering information and publishing news-related content. As the media landscape has rapidly evolved in the last few years, changes were introduced in February 2023 to help build trust and improve accountability in the industry, while covering a more diverse range of digital news creators (including publishers, editors, journalists, citizen journalists, reporters, bloggers, photojournalists, freelancers, and content creators) and their practices.

Some key changes

A major change concerned the issue of inaccurate content and was propelled by the challenges faced in distinguishing true information from misinformation and disinformation, including that generated by AI. To help journalists and publishers ensure that their material is supported by verifiable and legitimate sources, the Code and its associated Guidance on Clause 1 (Accuracy) and Clause 10 (Transparency) provide advice on fact checking and source verification, particularly within an online context. Specifically, the Code now requires publishers to exercise human editorial oversight to ensure the accuracy of any AI-generated content, clearly label such content, and take reasonable steps to limit the potential spread of false information (whether deliberate or accidental) by verifying the story with other sources and checking the information against other reliable sources.

Changes were also introduced in relation to the coverage of news stories involving children. These changes acknowledge children’s media literacy and autonomy, as well as the protections necessary for their development. The revised Code defines a child as anyone under the age of 18 and places an obligation on publishers to “reasonably consider” requests from children to remain anonymous during news-gathering and publication (Clause 3.3), as well as requests from those who were under 18 when an article was published to anonymise that news content in the present day (Clause 3.4). This is a welcome recognition of the proposition that individuals should not be adversely affected later in life because stories that concerned them as children remain widely available online. Importantly, under the new Code, an appropriate adult cannot veto a child’s refusal or revocation of consent (paragraph 3.1.2 of the Guidance to the Code).

Because of the internet and social media, publishers must also take extra care not to identify children indirectly through “jig-saw identification”, i.e., the ability to work out someone’s identity by piecing together different bits of information supplied by several features of a story, or across articles or news outlets (the same can apply to adults, e.g., in cases where victims of sexual offences enjoy anonymity by law). The Code (Clause 3.2) requires publishers to consider using techniques or practices that remove identifying data (e.g., the area of a city where a child lives, their parents’ occupations or other unusual details that could lead to their identification). This practice also helps publishers comply with data minimisation requirements under data protection law.

Another significant change concerns the provisions on discrimination under Clause 4. The previous version of the Code stated that publishers would be found in breach if they incited hatred “against any group … [on any] characteristic that makes that group vulnerable to discrimination”. This reflected the legal standard under UK law, but it was not adequately enforced, particularly online. The revised Code holds publishers to stricter standards. Clause 4.3 reads: “Publishers must not encourage hatred or abuse against any group” based on those characteristics (emphasis added). The new wording lowers the threshold for what IMPRESS regards as discriminatory coverage and takes into account its potential effect not just on the communities concerned but on society as a whole. This change, according to IMPRESS’ Deputy Chief Executive Lexie Kirkconnell-Kawana: “accounts for prejudice that could be more insidious and be more cumulative or more thematic, and not a direct call to action or violence against a group of people – because that’s an incredibly high threshold, and it’s not often how news is carried. You don’t see headlines saying […] ‘Take up arms against x group’.”

Clause 7 on privacy highlights that, when determining the privacy status of the information, publishers must give “due consideration to online privacy settings” (Clause 7.2(b)). Public interest justifications may, however, apply. The provision challenges the widely held misconception that information found or posted online is automatically made public or free to use. The Guidance to the Code acknowledges that an individual’s expectation of privacy may be weaker where no privacy settings are in place but clarifies that the absence of privacy settings will not necessarily prevent a breach of this Clause. It does not automatically mean that an individual consents to publishers or journalists publishing their content, which may reach an entirely different – or even wider – audience than the audience usually viewing the content on that individual’s account (paragraphs 7.1.4 and 7.2.6 of the Guidance to the Code).

Editorial responsibility and accountability with an outlook to the future

The new Code is the outcome of an intensive two-year review process, which involved consultation with academics, journalists, members of the public and industry stakeholders. Richard Ayre, Chair of IMPRESS, stated: “With more news, more sources, more publishers, more opinions than ever before, the opportunities for journalism are limitless. But nothing’s easier for a journalist to lose than public trust. This new Code sets the highest ethical standards for IMPRESS publishers, large and small, and whatever their point of view, so the public can confidently engage with the news of today, and tomorrow.”


This article was first published on the IRIS Merlin legal database. The original piece can be viewed here.

OFCOM Reports on Its First Year of Video-Sharing Platform Regulation

By Dr. Alexandros Antoniou, Lecturer in Media Law, University of Essex

Ofcom, the UK’s communications regulator, has published its first report on UK-established video-sharing platforms (VSPs) since becoming their statutory regulator. The report is the first of its kind under the VSP regime and reveals information previously unpublished by the regulated companies in scope.

Platforms’ compliance with the new VSP regime

Ofcom’s report outlines the regulator’s key outcomes from the first year of regulation (October 2021 to October 2022). Its findings stem from the use of the regulator’s statutory powers under section 368Z10(3) of the Communications Act 2003 to issue enforceable information requests to all notified VSPs.

Specifically, some platforms made positive changes to their systems and processes in light of the new VSP requirements, e.g., TikTok’s dedicated online safety committee to provide oversight of content and safety compliance, Snapchat’s parental control feature, and OnlyFans’ age assurance tools for all new UK subscribers. However, Ofcom found that platforms provided limited evidence of how well their user safety measures operate, making it difficult to assess their effectiveness and consistency. It also emerged that some platforms are not adequately resourced, equipped and prepared for regulation, and there is a clear need for some of them to improve the quality of their responses to the regulator’s information requests. Moreover, Ofcom found that risk assessment processes were not prioritised by platforms, despite their importance in proactively identifying and mitigating safety risks. Risk assessments, however, will be a requirement on all regulated services under future online safety laws that will eventually supersede the VSP regime. Finally, some adult VSPs’ access control measures were not found to be sufficiently robust in preventing children from accessing pornographic content.

Moving towards the second year of the implementation of the regime, Ofcom will dedicate most of its attention to the comprehensiveness of user policies (also known as Community Guidelines), including their application and enforcement; the availability of appropriate tools empowering users to tailor their online experience; and the implementation of suitable age verification (AV) mechanisms to protect children from harmful online content, including pornography.

To increase the transparency of platform processes and raise awareness of how VSPs protect against harmful content, Ofcom’s report also sets out the measures adopted by some platforms to protect their users. The following platforms were reviewed in particular: TikTok, Snapchat, Twitch, Vimeo, BitChute, and some smaller VSPs including Fruitlab, ReCast Sport and Thomas Cook, as well as smaller adult VSPs like AdmireMe, FanzWorld and Xpanded. The report explains the governance processes within each regulated service (giving detail on their systems for online safety risk management) and the journey followed by users/subscribers on each of these platforms.

Additional sets of research

Ofcom also made available a report on the VSP Landscape in the UK, describing the context in which providers apply protection measures. The report offers insights into: (a) who the notified VSP providers are; (b) how many users of VSPs there are in the UK and their demographics; (c) what the main business models used by VSP providers are; and (d) what information VSP providers make publicly available in their transparency reports.

With the aim of building its evidence base around the appropriateness of certain protection measures, Ofcom commissioned further sets of research to understand people’s experiences of using (and attitudes towards) safety measures on VSPs. The research explored a range of users’ perspectives, from parents (or carers) of children aged 6-17 to users of porn platforms.

More specifically, the VSP Parental Guidance Research looked at parents’ attitudes towards children’s online behaviours. In summary, it found that parents generally perceived VSPs as carrying a constant and unregulated stream of content. Based on their current understanding and the information available to them, six in ten parents said they did not use parental controls on the VSPs that their child uses, because their child “did not need them”. Just over half of parents remembered seeing or receiving guidance on how to keep their child safe online from multiple sources (government websites being the most trusted). However, the study revealed that the process of finding information on online safety was described by many parents as overwhelming and often only prompted by a specific incident (e.g., school guidance, discovering their child was looking at inappropriate content). Parents were also appreciative of safety guidance from VSPs that was clear, digestible, accessible, and easy to understand.

An additional set of research, i.e., Adult Users’ Attitudes to Age-Verification (AV) on Adult Sites, found that, although there was broad support from adult participants for age assurance measures to prevent under-18s from accessing online pornography, UK adult sites were not doing enough to protect children. The biggest adult video-sharing site, OnlyFans, introduced new age verification in response to regulation (using third-party tools) but smaller sites based in the UK did not have sufficiently robust access control measures. Subscriber sign-on processes show that smaller UK-established adult VSPs have AV measures in place when users sign up to post content, but users can generally access adult content simply by self-declaring that they are over 18. Ofcom’s research showed that 81% of participants accepted AV measures where these were expected in general (e.g., whilst purchasing alcohol online or participating in online gambling). A similar proportion (80%) felt Internet users should be required to verify their age when accessing pornography online, especially on dedicated adult sites. The use of a credit card was the preferred means of AV for paid access to pornography. Serious concerns were expressed by participants about how user data might be processed and stored during AV processes to access pornography, reflecting a very low level of trust in the data privacy practices of adult sites.

These findings will inform Ofcom’s regulation of VSPs, including the rules on the protection of children, and its engagement with notified providers.


This article was first published on the IRIS Merlin legal database. The original pieces can be viewed here.