Navigating Trade Mark Protection in the Digital World

Photo by Kristian Egelund on Unsplash

By Laia Montserrat Chávez González and Renata Alejandra Medina Sánchez

The digital age has given businesses the opportunity to expand and establish their trade marks worldwide. As a result, they have been able to make themselves known to more consumers and increase their profits. A well-known trade mark has the power to create great economic value for businesses and strengthen a company’s branding in the marketplace.

With WhatsApp, Facebook, and Instagram dominating the social media landscape in the United Kingdom, brands are eager to bolster their presence on these platforms to effectively promote their products or services. However, the advent of e-commerce and social media platforms and the emergence of new technologies have created novel challenges for trade mark protection.

One of these challenges is trade mark infringement, which takes place when someone violates the exclusive rights attached to a registered trade mark without the owner’s authorisation or licence. In this evolving digital landscape, businesses must swiftly adapt and deploy suitable protection strategies to safeguard their trade mark rights online.

This blog post aims to elucidate trade mark infringement on social media for entrepreneurs, while outlining available protective measures within these platforms.

The importance of trade marks in the online business landscape

Social media platforms such as Facebook offer a highly promising opportunity for businesses to communicate with their audience in a fast and direct manner. Several studies have highlighted that these platforms are efficient channels to manage communication with customers and reach a broader audience, not only as a publicity channel but also as a way to attract new clients and recruit employees. In this sense, social media is a valuable tool for preserving and boosting brand reputation at minimal cost.

Brands build relationships with their customer base through offline methods such as offering personalised services, hosting events, and running loyalty programmes, as well as online methods including engaging on social media, utilising email marketing, fostering online communities, collecting and acting on feedback, and providing responsive customer support. The real-time and multi-directional nature of social media facilitates communication and content usage, challenging offline communication models such as radio and television advertisements. Social media now allows consumers to actively participate by sharing opinions and information, influencing brand perception.

To distinguish themselves and ensure their unique identity is recognised and protected across all platforms, brands rely on trade marks. Trade marks have the same role in the online business realm as they do in traditional markets: to differentiate goods and services. Therefore, their legal protection is crucial, not just to prevent consumer confusion but also to safeguard a business’ reputation.

In 2009, L’Oréal SA, a multinational cosmetics and beauty company, and other luxury brands filed a lawsuit against eBay for allowing the sale of counterfeit products bearing their trade marks on its platform. This example highlights the challenges that e-commerce platforms face in monitoring and preventing trade mark infringement by third-party sellers and emphasises the need for robust monitoring systems and cooperation between online platforms and brand owners to maintain consumer trust.

Understanding trade marks: some fundamental concepts

Entrepreneurs must grasp fundamental concepts associated with trade marks and their use in order to protect their brands from infringement on social media and maintain brand integrity in a competitive online marketplace. This section briefly outlines the importance and relevance of these concepts for trade marks.

Trade mark: A trade mark is any sign capable of being represented in a manner which enables the registrar and other competent authorities and the public to determine the clear and precise subject matter of the protection afforded to the proprietor, and of distinguishing goods or services of one undertaking from those of other undertakings. It may consist of words (including personal names), designs, letters, numerals or the shape of goods or their packaging.


Domain Names: Trade marks and domain names intersect in protecting brand identity online; while a trade mark provides rights to a brand name, securing a corresponding domain name, namely the unique address that people use to find the website on the internet, helps ensure exclusive use and prevents ‘cybersquatting’, i.e., the unauthorised registration of domain names resembling trade marked names.


Usernames: Trade marks extend to social media usernames, as usernames often serve as digital representations of trade marks. A username on social media can enhance brand recognition but might not guarantee trade mark protection.


Terms and conditions: They are legal agreements between a service provider and its client, which set the obligations and responsibilities of the parties. Platforms’ terms and conditions may restrict or regulate trade mark use to prevent confusion or misuse, ensuring brand integrity and user trust.


Branding: By integrating various elements such as logo, design, and mission, branding refers to the process of developing a positive image of a company or its products in customers’ minds. This is achieved by ensuring a consistent theme across all marketing channels, including social media, maintaining consistency in trade mark use, and adhering to trade mark laws.

Trade mark infringement

Infringement occurs when someone uses a trade mark that is substantially similar to a registered trade mark owned by another person, for products or services that are the same or similar to those covered by the registered trade mark. The following represent some common ways in which infringement can occur on social media:

Jacked usernames:

This refers to social media or online account usernames that are ‘hijacked’ by someone other than the trade mark owner, often to exploit a well-known trade mark’s reputation or value. This unauthorised use can mislead consumers and harm a brand. For instance, this could happen if someone other than Nike Inc. registers the username ‘nikeofficial’ on a social media platform. This unauthorised use could confuse consumers into thinking that the account is the official representation of the Nike brand, potentially infringing on Nike’s trade mark rights.

Hashtag hijacking:

Using a trade marked name or slogan as a hashtag without permission can be particularly problematic, especially if the hashtag is used in a way that could confuse consumers or dilute the brand’s identity. Trade marked terms in hashtags accompanying social media posts should be avoided unless explicit permission from the trade mark owner has been obtained.

As a general rule, using a protected trade mark in a hashtag can risk infringement if it implies sponsorship, association, or endorsement by the trade mark owner. However, if the hashtag simply promotes the user’s own goods or services, indicating compatibility or common origin, it may be considered permissible.

Advertising and trade mark use:

Incorporating trade marks into social media advertising initiatives is very important for businesses. However, according to consumer law and domestic legislation applicable in each case, trade mark owners must ensure that advertisements adhere to trade mark laws, steering clear of any practices that could be deemed misleading or deceptive.

Claims made in social media ads must be substantiated, and the use of trade marks must not create a false impression about the product or service. Misleading use of trade marks can result in regulatory action, fines, and damage to a business’ reputation. Moreover, social media ads employing registered trade marks must not suggest affiliations or product characteristics that are untrue. For instance, implying that a product has certain qualities or is endorsed by a trade mark owner when it is not can be considered deceptive.

Misleading influencer partnerships:

Content creators, such as popular bloggers, online streamers, celebrities, and social media personalities, have the power to influence customers’ buying behaviour. Partnering with influencers to promote products or services might seem like a good idea for businesses looking to expand their audience. Nevertheless, these collaborations should not involve making false claims about the benefits and effectiveness of what is being promoted.

In addition to the potential for misleading claims, influencers’ promotional content on social media can amount to trade mark infringement. When influencers use logos, brand names, or other trade marked elements without proper authorisation, they may inadvertently or deliberately create confusion about the source or sponsorship of the products or services being promoted. Such unauthorised uses, if not properly monitored, can mislead consumers into believing that an official partnership exists between the influencer and the trade mark owner when it does not. Such actions can dilute the brand’s identity and value, potentially resulting in legal disputes and damaging the reputation of both the influencer and the involved businesses.

Confusing similarity:

Using a sign that closely resembles an existing registered trade mark in a way that could confuse consumers could constitute infringement. Such confusion can arise from similar logos, names, or products/services offered under those trade marks, potentially leading consumers to mistake one for the other.

Consider, for instance, a scenario where a tech start-up called ‘AppLinx’ creates a logo closely resembling Apple’s iconic bitten apple and uses a name like ‘iLinx’ to promote its mobile app development services on social media. Users browsing their feed might mistake ‘iLinx’ for an Apple-affiliated service, potentially leading to trade mark infringement issues and confusion among consumers about the origin of the app development services.

Domain name infringement:

It should be remembered that trade marks represent intellectual property rights protecting brands and their associated products or services, but domain names are addresses used to access websites on the internet.

Domain name infringement can occur by registering a domain name that is deemed to be identical or confusingly similar to another party’s trade marked name or brand, known as ‘cybersquatting’. Such similarity can lead to confusion among consumers, potentially diverting traffic away from the rightful owner’s website or causing harm to their reputation.

Take the example of a reputable company, ‘XYZ Clothing,’ which owns the trade mark and domain name ‘XYZClothing.com.’ If another party registers the domain ‘XYZClothing.net’ and uses it to sell counterfeit goods, customers searching for ‘XYZ Clothing’ might stumble upon the ‘.net’ website, purchase lower quality products, and have a negative experience. This confusion and association of poor quality can damage XYZ Clothing’s reputation and lead to a loss of trust among consumers.
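As a rough illustration of how a brand owner might automate part of this monitoring, the short Python sketch below flags domain registrations whose name closely resembles a protected brand, using the standard library’s string-similarity ratio. The brand name, watchlist, and threshold are all illustrative assumptions, and a similarity score is of course no substitute for a legal assessment of confusing similarity.

```python
# Illustrative sketch only (not legal advice): flag domain names that are
# confusingly similar to a hypothetical protected brand, using Python's
# standard-library difflib similarity ratio.
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity score between two strings (case-insensitive)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def flag_lookalikes(brand: str, domains: list[str], threshold: float = 0.8) -> list[str]:
    """Flag domains whose name part closely resembles the brand name."""
    flagged = []
    for domain in domains:
        name = domain.rsplit(".", 1)[0]  # strip the top-level domain
        if similarity(brand, name) >= threshold:
            flagged.append(domain)
    return flagged


# Hypothetical watchlist of newly registered domains.
watchlist = ["xyzclothing.net", "xyz-clothing.shop", "totallyunrelated.com"]
print(flag_lookalikes("xyzclothing", watchlist))
# → ['xyzclothing.net', 'xyz-clothing.shop']
```

A tool like this can only surface candidates for review; whether a flagged registration actually amounts to cybersquatting depends on factors such as bad faith and legitimate interests, which are assessed case by case.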

In the 2003 case of Harrods Limited v. Pete Lormer (WIPO Case No. D2003-0504), an American individual named Pete Lormer registered the domain name ‘www.harods.com’, which closely resembled the HARRODS registered trade mark. Users entering ‘www.harods.com’ were redirected to ‘www.expedia.com’, suggesting a false sense of origin or sponsorship for any associated products, goods, or services. As a result, the Panel of the WIPO Arbitration and Mediation Center concluded that Lormer had registered and used the domain in bad faith, intending to exploit the HARRODS trade mark for commercial gain. Consequently, the Panel ordered the transfer of ‘harods.com’ to Harrods Limited.

Parody accounts:

Another issue that deserves attention is parody accounts mimicking the style of a well-known brand without clearly labelling themselves as satire. These are social media profiles or online personas that use the likeness of a person, group, or organisation to discuss, satirise, or share information about that entity. Although such accounts may be lawful when the account name and profile clearly indicate no affiliation with the original entity, using terms like ‘parody’, ‘fake’ or ‘fan’, they can cross legal boundaries when they engage in trade mark infringement, impersonation, or deceptive practices that mislead users.

One notable example involved a spoof Twitter account, @UKJCP, which operated under the name ‘UKJobCentrePlus not’ and had adapted the job placement body’s official logo. The account mocked Jobcentre Plus and welfare policies, attracting the then Conservative government’s fury. The Department for Work and Pensions (DWP) complained to Twitter that the account was set up ‘with a malicious intent’ to undermine the work of Jobcentre Plus, but Twitter (now X), after initially suspending the account, eventually restored it because, at the time, it allowed parody accounts as long as they were clearly labelled as such.

Ensuring trade mark integrity amidst digital challenges

In conclusion, the digital landscape offers both vast opportunities and significant challenges for trade mark protection. Navigating trade mark protection in today’s digital age requires a deep understanding of the evolving dynamics of social media and e-commerce platforms. As brands increasingly engage with their audience on platforms like Facebook and Instagram, the risk of trade mark infringement and misleading advertising also increases. Therefore, businesses must implement rigorous measures to protect their trade marks. These measures may include monitoring for unauthorised use, ensuring transparent endorsement disclosures, and working with social media platforms to enforce policies effectively.

To preserve consumer trust and protect brand integrity in today’s competitive digital marketplace, businesses must prioritise transparency and implement robust trade mark protection strategies. A proactive approach to trade mark protection empowers brands, ensuring their sustained success and reputation. At the same time, the active involvement of social media platforms in developing and enforcing trade mark protection policies is essential in enhancing enforcement against trade mark infringement.

About the authors

Laia Montserrat Chávez González is currently in her final semester for a double degree in Law and Economics at Tec de Monterrey in Mexico, and in parallel pursuing an LLM in International Commercial and Business Law at the University of Essex. Laia has advised national and international clients on trade mark registrations and feasibility studies, and managed administrative procedures with the Mexican Institute of Industrial Property. Passionate about protecting creativity and innovation, she also oversaw intellectual property aspects in transactions, ensuring compliance across different legal systems and facilitating trade mark rights transfers.

Renata Alejandra Medina Sánchez is a lawyer who graduated from the Pontifical Catholic University of Ecuador, with a Senior Specialization in Business Law from the Universidad Andina Simón Bolívar, and a Master’s in Contemporary Contracting from the Universidad Externado de Colombia. She is currently pursuing an LLM in Corporate Social Responsibility and Business at the University of Essex. She holds extensive experience in corporate, contractual, labour, tax, and commercial law. Throughout her career, she has collaborated with both domestic and foreign companies, assisting them from their establishment to the expansion of their operations. Her expertise encompasses involvement in merger and acquisition processes, as well as the drafting and negotiation of contracts.

How Harry Styles’ stalking incident highlights the boundaries of celebrity worship

Image via Wikimedia Commons

A later version of this article was published by The Conversation on 2 May 2024 and can be read here.

By Alexandros Antoniou, Essex Law School

In our digitally interconnected world, the allure of Hollywood and music sensations captivates millions, drawing admirers into the intimate orbit of their idols. Falling under the spell of a celebrity crush is a common aspect of adolescent development, but today’s heightened accessibility can foster a dangerous sense of entitlement among fans.

The recent conviction of Harry Styles’ stalker, who inundated him with 8,000 cards in under a month, vividly illustrates the alarming consequences of overstepping boundaries in the perceived intimacy between fans and celebrities. Notably, journalist Emily Maitlis, The Crown actress Claire Foy, and TV presenter Jeremy Vine have all experienced similar stalking incidents.

A range of audience engagement

We connect to media figures in different ways, from deeply empathising with a cherished character’s experiences to feeling a sense of closeness with TV hosts who become a familiar presence in our lives.

Sometimes we immerse ourselves in a character’s narrative to the extent that their joys and sorrows become intimately felt experiences (e.g., a deep sense of sadness when a beloved TV character undergoes a loss), regardless of their disparate backgrounds or life journeys.

Repeated exposure and personal disclosures from media personalities can create a sense of closeness in viewers, despite the lack of direct interaction, as when a TV host becomes a familiar presence in our daily lives. These connections, known as parasocial relationships, thrive on perceived intimacy but lack reciprocity.

Fandom, marked by intense admiration, elevates parasocial relationships to pedestals and becomes deeply ingrained in one’s identity. This devotion can extend beyond individual characters to entire shows or franchises, manifesting in activities like collecting merchandise and engaging with online fan communities.

Our ties to fictional characters, the actors embodying them, and influential media figures vary but collectively form a spectrum of audience involvement. This intricate web of seemingly harmless bonds can morph into toxic obsessions, as seen in the case of Emily Maitlis’ stalker, whose “unrequited love” for the former news anchor led to repeated breaches of a restraining order.

However, it is not merely a gradual escalation of these connections; rather, individuals (possibly battling mental health challenges) may harbour various motivations ranging from vengeance, retribution, and loneliness to resentment, a yearning for reconciliation, or a quest for control. They may hold delusions, such as “erotomania,” believing someone loves them and will eventually reciprocate. Their behaviour might stem from an obsessive fixation on a specific cause or issue.

In the complex realm of fandom culture, the law starts by recognising that beneath the celebrity veneer of flawless posts and red-carpet appearances lies a real person with vulnerabilities. Like everyone, they too deserve a zone of privacy which comprises different layers of protection.

The sanctum core

Picture your life as a mansion, with each room symbolising different facets: thoughts, emotions and personal endeavours. Encircling this mansion is a protective perimeter of a privacy zone, shielding specific aspects of your life from unwanted intrusion, be it by strangers, acquaintances, or the government. Maintaining the integrity of these restricted areas is left to a mixed legal environment encompassing civil remedies and criminal offences, including racially or religiously aggravated variants.

Secretly monitoring someone’s activities or lingering around their home without valid cause gravely endangers this zone. Claire Foy’s stalker, who had become “infatuated” with the actress, received a stalking protection order after appearing uninvited at her doorstep, leaving her “scared” of her doorbell ringing and feeling “helpless” in her own home. Sending unsolicited “gifts” is also associated with stalking, as demonstrated by Styles’ relentless pursuer who sent countless unsettling letters and hand-delivered two to the singer’s address, causing “serious alarm or distress”.

An intimate ecosystem

Importantly, the mansion’s private enclave embodies more than an inner sanctuary where people can live autonomously while shutting out the external world. Our private sphere also safeguards our personal growth and ability to nurture relationships, constituting a “private social life.”

When stalking rises to the level of inducing fear of violence or has a “substantial adverse effect” on someone’s regular activities, e.g., forcing a celebrity to make significant changes to their lifestyle, the law steps in to protect victims, including innocent bystanders who might experience direct intrusion themselves.

For example, Emily Maitlis’ stalker showed “breath-taking persistence” in contacting his victim and her mother, while Foy’s stalker had emailed the actress’ sister and texted her ex-boyfriend. Such conduct warrants legal intervention because it can severely impair someone’s ability to freely establish normal social networks and ultimately increases isolation, amplifying the disruptive impact on their support systems.

Advancements in communications technology have driven the surge in “cyberstalking”. For example, presenter Jeremy Vine’s stalker “weaponised the internet”, sending relentless emails identifying his home address and instilling fear for his family’s safety. Such digital variations of traditional stalking might also be pursued through communications offences, including the newly enacted “threatening communications” offence.

FOUR indicators

Behaviours may vary but they frequently exhibit a consistent pattern of Fixated, Obsessive, Unwanted and Repeated (FOUR) actions, violating not only a person’s inner circle privacy zone but also the outer sphere of their private social life.

While rooted in natural admiration for talent and charisma, celebrity worship can blur the line between harmless adoration and harmful obsession, particularly in an age dominated by social media that gives unprecedented access to our favourite stars. Legal boundaries delineate genuine appreciation from repetitive, oppressive conduct that jeopardises someone else’s well-being.

The Anatomy of Impact: A Conversation with Professor Lorna Woods

Photo by Joshua Hoehne on Unsplash

By Professor Carla Ferstman, Director of Impact, Essex Law School

As academics, we conduct research for all sorts of reasons. We seek to advance knowledge and innovation in the areas in which we specialise, and we try to make connections with research being done in other disciplines, to enhance our understanding of, and contribute to addressing, cross-cutting, complex challenges.

Academic research is increasingly being applied outside of academia to foster external impacts in our communities and societies. Research-led teaching can also foster opportunities for cutting-edge student learning.

The UK Research Excellence Framework values world-leading research that is rigorous, significant and original. It also encourages and rewards research that generates impact, which it understands as “an effect on, change, or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia” (REF2021).

Impactful research is particularly relevant and important for the discipline of law, where colleagues’ work can lead to changes in how justice is perceived and how access to justice can be better achieved. Academic research in law has led to and influenced the direction of law reform, and academic findings have also been applied authoritatively in court judgments. Legal research has also led to the development of new policies and regulatory frameworks in the UK and internationally.

Despite the importance many legal academics place on generating impact, the route to impact is not obvious. Achieving impactful academic research defies a one-size-fits-all formula, though certain key pointers are invaluable:

First, impactful research is generated by academics who produce excellent, groundbreaking research.

Second, academics should be mindful of who (e.g., community stakeholders, policy-makers, decision-makers) would benefit from knowing about the research and should develop a strategy to ensure they effectively disseminate their findings.

Third, academics seeking to generate impactful research should be actively engaging with those who can benefit from their research, adapting their approach based on stakeholder needs and circumstances.  

Learning from example

Academics can glean wisdom from exemplary models. And there is no better example than Professor Lorna Woods, whose research contributed significantly to the Online Safety Bill (now Online Safety Act 2023) and led to her being awarded an OBE for services to internet safety policy.

I sat down with Professor Woods to get a clearer understanding of her trajectory – how she got from A to B to C (or indeed, from B to A to F to C), to better appreciate the time her ideas took to percolate and the challenges she faced along the way.

I wanted to understand whether her research was picked up by government by happenstance, by careful, plodding planning, or some combination of the two. I also wanted to know whether there was any magic formula she could share for generating impactful research.

Lorna qualified as a solicitor and worked in the early 1990s for a London city firm, where she was exposed to a variety of areas of law, including international trade, competition, and commercial law. She began to work with two of the partners on matters involving regulation, intellectual property, and media. She happened to be at the firm when many developments in the law occurred, such as the Broadcasting Act 1990, updates in data protection rules, and other changes as a result of growing public access to the internet.

This quickly developed into a specialism related to technology. “The work was really interesting. It wasn’t just the typical due diligence or deals management work that one often received in a corporate solicitor’s firm, there was a space to think and a space to have your say”.

Also, during this time, Lorna did some consulting work for the European Commission in Eastern European countries following the political changes in the early 1990s, focused on media freedom and public service broadcasting, which involved new thinking about the rights of the public audience that had not yet been theorised.

Lorna left the firm after about five years when, as often happens, she began to take on a more supervisory role, with some of the most interesting pieces of work being delegated to more junior colleagues. She pursued an LLM degree at the University of Edinburgh (legal theory and human rights, with a dissertation on federalism and the European Union) and began to apply for academic roles. She secured a position at Sheffield in 1994 and began teaching EU and public law.

The Eureka moment or more of a slow-burner?

Gradually Lorna’s research began to drift back to media law and data protection, incorporating areas she had been studying around human rights, public speech, surveillance, and the rights of journalists, but with her own take. She recalled that “A lot of people were talking about journalists’ rights, but I was focussed on the rights of the companies who were transmitting; an ‘essential facilities’ argument but approached from a rights perspective. I also started looking at these issues from the perspectives of EU law and the free movement of cultural standards [the rights of the audience] rather than simply as an issue of freedom of expression.”

Central to this was the idea that there were different actors in an information environment – the speakers and the audience, and something in the middle, the platform, which is not really seen or thought about. The question Lorna had was whether these entailed separate rights or were all part of a unified right to information.

In 2000, Lorna began collaborating with Professor Jackie Harrison at Sheffield on new media and media regulation, and this is where she further conceptualised her thoughts on the rights of the audience not only to have access to information, but to information that was reasonably reliable and, where possible, to a diversity and plurality of sources.

This also connected to her thinking about how to find information on the internet, who curates what we can find, and what responsibilities may be attached to that curation. The flip side to this was considering the nature of states’ positive obligations to provide a safe online environment. Lorna also began to explore issues around user-generated content.

In response to the growing awareness of how female politicians and activists were being targeted on Twitter (now X), and the notoriety of the abuse faced by Caroline Criado Perez and Walthamstow MP Stella Creasy, Lorna started looking at what controls were in place, and began to consider the gaps in regulation and how they could best be addressed.

At the time, she observed that politicians had embraced Twitter, amplifying their influence while also making them more accessible and exposed. The platform facilitated direct communication between everyone on the network, including unsavoury individuals who were using it as a vehicle for abuse. This was fuelled by anonymous accounts, hashtags that allow users to jump on the bandwagon, and seemingly little moderation at that stage. There were many instances of public-facing women receiving rape and death threats.

In consequence, there were several instances in which users were being charged in the UK under section 127 of the Communications Act – a low-grade offence which criminalises the sending, via a “public electronic communications network”, of a message which is “grossly offensive or of an indecent, obscene or menacing character”. But it was never clear to Lorna that using the criminal law was the best solution to the problem.

The campaign for law reform begins to take shape

Around 2015, Lorna became aware that the then Labour MP Anna Turley was developing a private member’s bill: the Malicious Communications (Social Media) Bill. Someone whom Lorna had met in an unrelated capacity – “this is just really a feature of when you work in a certain area, you meet people linked to that area. And progressively, your army of contacts comes back to help” – William Perrin, managed to get her in the door to meet the MP.

Together, Lorna and William helped to draft the Bill. The goal was to give users better tools (user empowerment features and functionalities) so that they could filter and triage incoming content, at least as a starting point for improving the online environment. Their advice (which was taken on board) was not to remove platform immunity for third-party content; they recognised that the platform providers were offering an important service worth protecting.

Part of the rationale for this was the connections they saw between internet platform providers and telecoms providers: “If you were to hold a telecoms provider responsible for anything communicated on the service, they would become very cautious and ultimately it would shut down the service. So, there was a need for caution.” Ultimately, the Bill did not progress – private members’ bills rarely do – but such bills serve to bring matters to the Government’s attention and can form part of a campaign for change.

Subsequently, the Government published a Green Paper on internet safety in 2017, which raised significant concerns. This was the era of Cambridge Analytica and misinformation, but there were also concerns about child pornography and online bullying, and about algorithms pushing harmful content to vulnerable users, brought into sharp focus by the tragic Molly Russell case. The Green Paper seemed to revisit the recommendation to remove (or significantly restrict) platform immunity for third-party content, which Lorna and William did not think was the best approach, for the reasons already stated.

There was a need to conceive of the problem at the systems level, rather than focusing merely on isolated items of content. The scale of the problem was, invariably, not about individual offensive posts but about how quickly content could go viral without appropriate controls, aided by features like the “like” button and the availability of anonymous, disposable accounts.

Similarly, the recommender algorithms that optimised posts for engagement tended to privilege the most irrational and emotional content, which was more likely to promote hatred or cause offence. Small changes to these kinds of features, together with greater investment in customer response, could significantly improve online safety. Thus, according to Lorna, there was a certain recklessness in the product design that needed to be addressed – this was the genesis of the idea of a statutory duty of care.

Paws for thought: remembering Faith, Lorna’s beloved cat who ‘Zoom-bombed’ video calls during lockdown and contributed much to debates on online safety

The statutory duty of care

Lorna and William produced a series of blogs and papers outlining this position, and the need for such reforms was also underscored by Lorna during an oral evidence session at the House of Lords inquiry into the regulation of the internet. The Carnegie UK Trust stepped up to champion Lorna and William’s work, facilitating its progress.

The UK Department for Culture, Media and Sport (DCMS) invited Lorna to give a briefing, and it became clear that there was some confusion: the DCMS had been under the impression that the conditionality of platform immunity already amounted to a statutory duty of care. Consequently, part of what Lorna and William tried to explain was how their proposal was compatible with the principle of platform or intermediary immunity. The proposal did not seek to impose liability on platforms for user content; instead, it required platforms to ensure that their product design met a duty of care to users. These discussions with DCMS continued and progressively intensified.

The White Paper which was ultimately released in April 2019 clearly articulated that “The government will establish a new statutory duty of care to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services,” and outlined what that duty of care would look like and how it would be regulated.  

Changes within the Tory leadership ultimately delayed progress. There were also concerns raised by some of those in the free speech lobby who saw parts of what was being proposed as censorship.  Lorna’s background in freedom of speech helped her respond to those concerns: “I was concerned that freedom of speech was being used as a slogan. When you look at any right and you look at it in isolation, you are then implicitly privileging it. And here, it was important not just to consider the rights of the ‘speaker’ but the rights of all the other users as well, some of whom are extremely vulnerable.” 

These points align with the 2023 report of the UN Special Rapporteur on Freedom of Opinion and Expression on gendered disinformation, which notes, citing Lorna’s submission, that “Systemic regulation, which emphasizes ‘architecture over takedown’, allows for more proportionate responses and is likely to be better aligned with freedom of expression standards.”

Certainly, companies were lobbying in other directions, and the Act reflects some corporate compromises, such as the requirement that the duty of care be applied proportionately, to account for the differing resources of regulated companies. But there were powerful counter-arguments, and the NSPCC and other organisations were effective allies, particularly on the need for clear duties of care in relation to child users. The Daily Telegraph also ran an important campaign on the legislation. At one point the Government sought to restrict the Act to concerns about children, so maintaining a focus on harm to adults became part of the campaign (unfortunately, only limited protections for adults were retained). Other parts of the Act differ from what Lorna and William had proposed, such as the division of the regulatory framework by reference to certain types of conduct. Inevitably there were compromises.

The Act as adopted envisages that the communications regulator, Ofcom, will produce guidance and codes explaining what internet platforms must do in order to operate in the United Kingdom. Consultations on these texts are ongoing. Once the guidance and codes are in place, companies will have a three-month period to bring their practices into compliance with the requirements; thereafter, the duties of care will become binding.

Some companies appear to be arguing that a duty of care is too vague a standard; this is hard to accept, however, given that it is a well-recognised legal standard. The goal for Lorna and others is therefore to ensure that the duty of care is made operational in a way that provides clear and adequate protections; it should be more than a ‘tick the box’ exercise.

I asked Lorna how this legislation would tackle the activities of companies operating outside of the UK, but with impacts in the UK. She explained that parts of the Act have extraterritorial effect, to the extent that company activities are directed at or have impacts in the UK. Some companies have introduced policies for different geographical regions to address the requirements of national legislation, so this is a possibility for multinational internet platforms accessible to UK users.  

I also discussed with Lorna whether she believed individuals like Molly Russell would be more effectively safeguarded now that the Online Safety Act is in force. She explained that Molly would not be better off today, because the guidance and codes are not yet in place. “Maybe in a year’s time, she would probably be better protected, as a child. I think an 18-year-old Molly would be sadly let down by the regime, which should be more robust.”

Given the clear synergies with her work on the Act, Lorna is also progressing work on online gender-based violence, along with work on gendered misinformation, incel culture, and extremism. As she looks deeper into these critical areas, her ongoing endeavours continue to reveal new challenges and fresh avenues for advocacy and change.

New communications offences enacted by the Online Safety Act 2023

Photo by Ravi Sharma on Unsplash

Dr. Alexandros Antoniou, Essex Law School

The Online Safety Act 2023 (OSA) introduced a range of measures intended to improve online safety in the UK, including duties requiring internet platforms to have systems and processes in place to manage illegal and harmful content on their sites. On 31 January 2024, Part 10 of the Act came into effect, introducing a series of new criminal offences which represent a significant step forward in tackling the complex challenges surrounding online communications safety.

Section 179 of the OSA establishes the criminal offence of sending false communications and seeks to target, among others, internet trolls. An offence is now committed if an individual (a) sends a message conveying information they know to be false; (b) intends, at the time of sending, to cause non-trivial psychological or physical harm to a likely audience; and (c) lacks a reasonable excuse for sending the message. Recognised news publishers and broadcasters are exempt. The offence does not apply to public screenings of cinema films either. It can be committed by individuals outside the UK if they are habitually resident in England, Wales, or Northern Ireland. Penalties include imprisonment for up to six months, a fine, or both. It is hoped the new offence will help clamp down on disinformation and election interference online.

Section 181 establishes the criminal offence of sending threatening communications. This is committed when an individual sends a message containing a threat of death, serious harm (e.g. bodily injury, rape, assault by penetration), or serious financial loss, with the intent to instil fear in the recipient that the threat will be carried out (whether by the sender or someone else). In cases of threats involving financial loss, a defence is available if the threat was used to support a reasonable demand, and the sender reasonably believed it was an appropriate way to reinforce that demand. This offence applies to individuals residing in England, Wales, or Northern Ireland, even if the sender is located outside the UK. Penalties include up to five years of imprisonment, a fine, or both. In March 2024, Essex law enforcement achieved a significant milestone by obtaining one of the earliest convictions under the new OSA, resulting in an eight-month jail sentence for Karn Statham. Statham harassed a woman by sending threatening messages and making repeated visits to her address after being instructed to cease contact.

A new criminal offence under section 183, dubbed “Zach’s law”, aims to protect people from “epilepsy trolling”. The campaign against such conduct began when eight-year-old Zach, who has epilepsy, was raising funds for the Epilepsy Society. Trolls inundated the Society’s profile with images and GIFs meant to induce seizures in people with epilepsy. While Zach was unharmed, others with the condition reported seizures after engaging with the fundraiser online. The Act creates the offence of deliberately sending or showing flashing images to individuals with epilepsy with the intent to cause harm, defined as inducing a seizure, alarm, or distress. Particular conditions (specified in the Act) must be met before a conviction is secured, in respect of both sending and showing flashing images electronically. Recognised news publishers, broadcasters, public screenings of cinema films as well as healthcare professionals cannot be guilty of this offence (which can similarly be committed by individuals outside the UK if they are habitually resident in England, Wales, or Northern Ireland). Penalties include imprisonment for up to five years, a fine, or both.

Moreover, section 184 outlaws encouraging or assisting serious self-harm. To be guilty of this offence, an individual must perform an act intended to encourage or assist serious self-harm in another person, whether through direct communication, publication or sending (or giving) items with stored electronic data. Serious self-harm encompasses actions leading to grievous bodily harm, including acts of omission such as encouraging someone to neglect their health regimen. The identity of the person harmed need not be known to the offender. The offence can occur regardless of whether self-harm is carried out and it is irrelevant who created the content in question (it is the sending that matters). The offence is punishable by imprisonment for up to five years, a fine, or both, and likewise, it applies to individuals habitually resident in England, Wales, or Northern Ireland, even if they are outside the UK.

Cyber-flashing on dating apps, AirDrop and other platforms will also result in perpetrators facing up to two years in prison. Section 187 of the Act introduces a new offence under the Sexual Offences Act 2003 pertaining to the sending of photographs or films of a person’s genitals to another individual. A person (A) is deemed to commit the offence if they intentionally send or provide a photo or video of another person’s genitals to another individual (B) under the following conditions: either A intends for B to view the genitals and experience alarm, distress, or humiliation; or A sends or provides such material with the aim of obtaining sexual gratification and is reckless as to whether B will experience alarm, distress, or humiliation. “Sending” covers sending through any means, including electronic methods, showing it to another person, or placing it for someone to find. A conviction for this offence could also lead to inclusion on the sex offenders’ register. In February 2024, an Essex Police team secured the UK’s first cyber-flashing conviction, with Nicholas Hawkes pleading guilty to sending explicit images via WhatsApp to cause distress. On 19 March 2024, Hawkes was sentenced to 66 weeks in prison. He was also made subject to a restraining order for 10 years and a Sexual Harm Prevention Order for 15 years.

Finally, the OSA repeals the legislation first introduced to tackle ‘revenge porn’ offences (sections 33-35 of the Criminal Justice and Courts Act 2015) and introduces a set of intimate image sharing offences. Specifically, section 188 of the OSA introduces a new base offence of sharing an intimate image without consent, carrying a penalty of imprisonment for up to six months. This applies when an individual intentionally shares an image portraying another person in an intimate context without their consent and without a reasonable belief in consent. Two more serious offences are established on top of that, both reflecting the offender’s higher culpability and carrying greater penalties: namely (a) intentionally causing alarm, distress, or humiliation to the person in the image; and (b) seeking sexual gratification from the act (these are outlined in sections 66B(2) and (3) of the Sexual Offences Act 2003). Threatening to share an intimate image of a person has also been made an offence where the perpetrator either intends to cause fear that the threat will be carried out or acts recklessly in doing so (this is found under section 66B(4) of the aforementioned 2003 Act). The new offences also fall under the sexual offender notification requirements. These new intimate image offences are also designed to tackle “deepfakes” and “down-blousing” (i.e. capturing images, typically of a person’s chest area, from a downward angle, often without their knowledge or consent). They also come with various exemptions (outlined under section 66C of the Sexual Offences Act 2003), e.g. where the photograph or film involves a child and is of a kind normally shared among family and friends.

While there is some overlap with existing offences, the new offences consolidate previous ones or address gaps. For example, the intimate image sharing offence widens the definition of the relevant photographs or films from “private sexual” to “intimate”, and makes it easier to prosecute those caught sharing such content online without the other person’s consent, as it removes the requirement that harm to the subject of the photograph or film be intended. The updated guidance of the Crown Prosecution Service aims to delineate the appropriate charge for each circumstance. The introduction of the new offences is anticipated to strengthen protections against online misconduct.


This article was first published on the IRIS Merlin database and is reproduced here with permission and thanks.

Essex Law School Expert Praised in House of Lords for Work on Online Safety Legislation

Photo by Marcin Nowak on Unsplash

Essex legal expert Lorna Woods has earned special recognition in the House of Lords thanks to her research and work supporting the landmark Online Safety Bill. The Bill successfully passed through Parliament and is now enshrined in law, having received Royal Assent on Wednesday 26 October 2023. The Act requires social media companies to keep the internet safe for children and to give adults more choice over what they see online.

Professor Woods helped shape the Bill after famously writing some of its founding principles on the back of a sandwich packet with the help of William Perrin, of the charity Carnegie UK, several years ago.

Professor Woods has continued to work with Carnegie throughout the last few years and provided expert advice to backbenchers and members of the House of Lords.

She was personally thanked following the final debate in the Lords by Lord Stevenson for her work on the bill.

Lord Clement-Jones added: “I pay my own tribute to Carnegie UK, especially Will Perrin, Maeve Walsh and Professor Lorna Woods, for having the vision five years ago as to what was possible around the construction of a duty of care and for being by our side throughout the creation of this bill.”

Professor Woods became a high-profile commentator on the Bill throughout its passage through Parliament, and recently recounted the “surreal moment” it was approved by the Lords in an interview with BBC Online.

In a separate interview with Wired, Professor Woods responded to criticisms of the bill by insisting it would help protect the human rights of children being exploited and abused online.

She was also quoted in the New York Times’ coverage of the Bill and has appeared on BBC Radio 5 Live.

Professor Woods said: “The Bill is significant as it marks a move from self-regulation – where service providers decide what is safe design and whether to enforce their community standards – to regulation under which services are accountable for those choices.”


This story was first published on the University of Essex’s news webpages and is reproduced on the ELR Blog with permission and thanks. The story was edited to reflect the fact that the Bill received Royal Assent.

Accountability for Digital Harm Under International Criminal Law: In Conversation With Sarah Zarmsky

Image via Shutterstock

Sarah Zarmsky, PhD Candidate and Assistant Lecturer at the Human Rights Centre, is a recipient of the 2023-2024 Modern Law Review Scholarship for her PhD thesis ‘Accountability for Digital Harm Under International Criminal Law’, supervised by Professor Carla Ferstman (University of Essex) and Dr Daragh Murray (Queen Mary University of London).

Sarah was awarded the Mike Redmayne Scholarship, instituted in memory of past MLR Committee Member Professor Mike Redmayne, which is presented to the best applicant in the fields of Criminal Law and the Law of Evidence (and related fields).

Modern Law Review Scholarships are prestigious awards provided to doctoral researchers in the United Kingdom and are funded by the Modern Law Review. Sarah is the first candidate from the University of Essex to receive the scholarship!

The research visibility team talked to Sarah about her success and took the opportunity to find out more about her plans:

This is an impressive achievement. How does it feel to bring this award to the University of Essex for the first time?

Thank you! It feels great, and I’m very proud of it and to be part of such an impressive group of recipients. It’s very rewarding to have the research you invest so much hard work in be recognised by others, especially by a journal as reputable as the Modern Law Review.

Could you tell us a bit more about your research? What gaps or shortcomings have you identified when it comes to addressing digital harm in the context of international criminal law?

My research examines how digital harm with relevance to the perpetration of international crimes may or may not be accommodated within existing international criminal law frameworks. Where criminalization may not be appropriate or feasible, it identifies possible alternatives for obtaining justice for victims of digital harms, such as through corporate criminal liability or regulatory frameworks.

I think the main takeaway so far is that the law has not yet ‘caught up’ with new ways of inflicting harm through technology, and depending on the type of harm, international criminalisation may or may not be the answer. There are some digital harms where we can see a clear link to existing international crimes, such as online hate speech and incitement to genocide, or sharing footage of crimes as an outrage upon personal dignity. It will be harder to accommodate more ‘novel’ types of harms, such as algorithmic harms or digital mass surveillance with ICL as it stands, so I am entering the stage of my research where I explore complementary strategies for achieving justice for victims of those harms.

In a single sentence, how would you summarise the importance of your research when describing it to an undergraduate student?

New technologies are important for advancing accountability for international crimes, but they also create new ways to perpetrate existing crimes or entirely new crimes, so this research is important in laying the foundation for future discussions as to how international criminal law can best accommodate digital harms.

With the evolving nature of digital threats and the global nature of the internet, how can international cooperation and collaboration be fostered to ensure effective accountability mechanisms for digital harm? Are there any notable examples or initiatives you could share that illustrate promising efforts in this area?

This is a complex question, but to answer it briefly, I would stress that as an international community, we need to be recognizing how harmful new technologies can be if used maliciously and that these harms are grave enough to be international crimes. I think sometimes the technology aspect can be roped in with other more ‘traditional’ offences and not treated as crimes on their own, which can result in less tailored justice for victims.

There are some promising developments in domestic war crimes trials, such as in The Netherlands, Germany, and Sweden, where individuals have been convicted and sentenced for war crimes for sharing degrading footage of executions on social media. I think these are positive developments because they serve the expressive function of recognizing how humiliating and degrading it can be to share footage of people in their most vulnerable states, and send a message that this is a serious crime.

This has not yet happened at an international criminal court or tribunal, but with the rise of open-source evidence initiatives at the ICC for example, I think it could definitely be a possibility going forward.

Do you anticipate your research will influence policy and if so, how?

I hope that my research can provide guidance for how ICL lawmakers and practitioners can ensure that the law keeps up with the times to fully address new ways of inflicting harm through technological means. My goal is to bring these issues to light and hopefully spark discussions within the ICL community about how we can account for digital harms moving forward.

Which direction do you see your research going in the future and why?

I’m now entering the third year of my PhD, during which I plan to apply my research thus far to one or two case examples and be able to highlight how the theory might work in practice, which I think will be really valuable. After the PhD, I would like to continue in this realm of ICL, human rights, and new technologies, perhaps delving deeper into one of the specific digital harms with a nexus to international criminal law that I have identified in the thesis.

Unbreakable Shields: Some Tips to Safeguard Your Digital Realm

Image via Shutterstock

By Dr. Audrey Guinchard, Senior Lecturer, Essex Law School

In the vast and interconnected realm of the digital age, our lives have become intrinsically linked to the virtual world. From online banking to social media interactions, our personal and professional activities have found a new home in cyberspace. However, as we embrace the convenience and opportunities offered by the digital revolution, we must also acknowledge the shadows of cyber threats that pose a constant risk to our security.

We live in an era where sophisticated hackers and malicious actors continuously exploit vulnerabilities, seeking to breach our defences and gain unauthorised access to our sensitive information. We have all heard of viruses, ransomware, phishing attacks, scams… but it’s not always easy to keep on top of best cybersecurity practices in our busy daily lives.

Who has never delayed updating their operating systems (OS) for a few days because of the sheer inconvenience of having to stop working and using the digital device for a solid 20 mins?

And what about those annoying passwords? Who has never been frustrated at failing to remember an obscure combination of letters, numbers and special characters in no logical order? Even the author who recommended this form of password management back in 2003 has since regretted his initial advice!

And how about the apparently preposterous advice not to re-use passwords, when one has to remember some 70 to 100 of them?

The consequences of a successful cyber-attack can be devastating, leading to financial losses, identity theft, and irreparable damage to our digital lives. So, what is a good starting point for good cybersecurity practice? No single measure is, on its own, fail-proof; it is their combination that will often delay an attacker who, discouraged, turns towards easier targets. It is also about minimising the impact our mistakes may have.

Start with an audit of your practices, so that you know where to begin. The easiest way is to answer the questionnaire on the UK National Cyber Security Centre (NCSC) website: its Cyber Action Plan. It genuinely takes only a minute or two; the questions may seem basic, but they cut to the heart of the top practices we can put in place. Then follow the detailed advice it offers, tailored to your answers.

Pay particular attention to your passwords. The question to ask oneself is always: if somebody had access to this password, what could they retrieve and find out? Would the password give them access to my bank account? To a work account? To social media? Or to all three?

You can notably check here whether a password has been compromised, or here whether the same has happened to your email address.
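Services of this kind can check a password without you ever submitting it in full. Have I Been Pwned, for example, offers a public “range” API based on k-anonymity: you send only the first five characters of the password’s SHA-1 hash, receive all matching hash suffixes, and compare the rest locally. A minimal sketch in Python (the endpoint `https://api.pwnedpasswords.com/range/` is Have I Been Pwned’s Pwned Passwords API; verify the details against its own documentation before relying on this):

```python
import hashlib
import urllib.request


def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest of a password into the
    5-character prefix sent to the API and the 35-character suffix
    that is matched locally (so the full hash never leaves your machine)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def pwned_count(password: str) -> int:
    """Return how many times the password appears in known breaches (0 if none).

    Only the 5-character hash prefix is transmitted; the response lists
    every suffix sharing that prefix, with a breach count for each."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate_suffix, _, count = line.partition(":")
        if candidate_suffix.strip() == suffix:
            return int(count)
    return 0
```

A breached password (say, a dictionary word) should return a positive count, while a long random passphrase should return 0 – a useful quick check before reusing an old favourite.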

You may want to consider a password manager, but be aware: password managers tend to store your data online, so your password data is not immune to hacking – as happened in 2022 to LastPass, a leading provider, and it will not be the last such breach.

So, the question is: do you really need this password to be stored online?

For example, if you only do your tax return from home, do you need to save online your password and ID number for HMRC? Because you know that whoever has your HMRC details may well be able to access lots of government services and impersonate you. And ID theft is no fun!

For iPhone users: disable access to the Control Centre when your phone is locked; otherwise, even with your screen locked, you are handing over control of parts of your phone. To do so, go to Settings, then Face ID (or Touch ID) & Passcode, scroll down to ‘Allow Access When Locked’ (under the Voice Dial feature) and disable ‘Control Centre’, ‘Accessories’ and ‘Wallet’.

We all make mistakes; we are humans after all! But the cost of weak cybersecurity practices is ID theft and lost data, for ourselves and for those we correspond with. So, do not delay your NCSC security audit and follow it up! By adopting some proactive strategies, we can take decisive steps towards protecting ourselves and preserving the sanctity of our digital identities.

The Criminalisation of Cybercrime: Connected Dots and Blind Spots in the Development of Legal Instruments

Photo by Markus Spiske on Unsplash

Building on her 15-year research on cybercrime, Dr. Audrey Guinchard, Senior Lecturer at the Essex Law School, gave a presentation on the criminalisation of cybercrime at the 2022 Society of Legal Scholars (SLS) Conference, held on 6-9 September at King’s College London.

In her paper, Dr. Guinchard explained that regulating crime is the traditional domain of nation states, and cybercrime is no exception. The first legal instruments to tackle computer-focused crimes (e.g., unauthorised access or hacking) date back to the seventies and eighties. Yet international institutions such as the OECD and the Council of Europe quickly recognised the transborder nature of cybercrime and were keen to push for a level playing field and better cooperation among nation states. Indeed, one could even argue that international efforts at criminalisation were concomitant with, if not anticipatory of, national legal instruments on cybercrime.

Dr. Guinchard pointed out that what is less known behind this push for harmonisation is the role of the computing community, a scientific community which has international dialogue at its heart and which has frequently engaged with legal professionals more than legal professionals have engaged with computer scientists. These key features of the criminalisation of cybercrime continue to shape modern legislation as the movement for reforming the UK Computer Misuse Act demonstrates.

Yet, Dr. Guinchard emphasised that blind spots remain: comparative law analyses can be superficial; the international outlook remains dominated by Western/European countries, ignoring the many voices of Asia, Africa and Latin America; the link between improving cybersecurity and decreasing cybercrime remains underappreciated; and criminalisation can carry hidden agendas which turn the fight against cybercrime into a battleground of values, as the recent push for the UN treaty on cybercrime illustrates.

So, if the transborder nature of cybercrime has long been a rallying cry for its worldwide criminalisation, the resulting legal frameworks continue to be subject to various influences and forces, acknowledged and unacknowledged, leaving a paucity of information as to how effective the law is in tackling cybercrime. Dr. Guinchard argued that reflecting on these pathways to criminalisation may allow us to move past the hype and the understatement that have marred the field since its inception.

A copy of Dr. Guinchard’s slides can be downloaded below. She can be contacted at this email address: abguin@essex.ac.uk.

‘Cyber Due Diligence’: A Patchwork of Protective Obligations in International Law

Photo by Kevin Ku

With a long history in international law, the concept of due diligence has recently gained traction in the cyber context, as a promising avenue to hold states accountable for harmful cyber operations originating from, or transiting through, their territory, in the absence of attribution.

Nonetheless, confusion surrounds the nature, content, and scope of due diligence. It remains unclear whether it is a general principle of international law, a self-standing obligation, or a standard of conduct, and whether there is a specific rule requiring diligent behaviour in cyberspace.

This has created an ‘all-or-nothing’ discourse: either states have agreed to a rule or principle of ‘cyber due diligence’, or no obligation to behave diligently would exist in cyberspace.

In their new article in the European Journal of International Law, Dr. Antonio Coco, Lecturer in Law at the University of Essex, and Dr. Talita de Souza Dias, Postdoctoral Research Fellow at the Oxford Institute for Ethics, Law and Armed Conflict (ELAC), propose to shift the debate from label to substance, asking whether states have duties to protect other states and individuals from cyber harms.

By revisiting traditional cases, as well as surveying recent state practice, the authors contend that – whether or not there is consensus on ‘cyber due diligence’ – a patchwork of different protective obligations already applies, by default, in cyberspace.

At their core is a flexible standard of diligent behaviour requiring states to take reasonable steps to prevent, halt and/or redress a range of online harms.

A copy of the authors’ article can be accessed here.


This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted reuse, distribution, and reproduction in any medium provided the original work is properly cited.

Article full citation: Antonio Coco, Talita de Souza Dias, ‘Cyber Due Diligence’: A Patchwork of Protective Obligations in International Law, European Journal of International Law, Volume 32, Issue 3, August 2021, Pages 771–806, https://doi.org/10.1093/ejil/chab056.

Effective Oversight of Large-Scale Surveillance Activities: A Human Rights Perspective

Photo by Lianhao Qu

Daragh Murray, Pete Fussey, Lorna McGregor, and Maurice Sunkin, University of Essex, explore the international human rights law implications of state surveillance in a new article published in the Journal of National Security Law and Policy (JNSLP).

Today, state surveillance involves the large-scale collection and analysis of digital data—activities which allow for widespread monitoring of citizens. While commentary on the legality of these bulk surveillance regimes has focused on whether such routine surveillance is permissible, the European Court of Human Rights has recently held that, subject to appropriate safeguards, surveillance of this type is legitimate, and sometimes necessary, for national security purposes in a democratic society.

In their analysis, the authors outline the types of oversight mechanisms needed to make large-scale surveillance human rights compliant. To do so, they break down state surveillance into its constituent stages—authorization, oversight, and ex post facto review—and focus their attention on the first two stages of the process.

First, they argue that effective oversight of authorizations requires increasing data access and ensuring independent judicial review.

Second, they argue that effective oversight of ongoing surveillance requires improving technical expertise and providing for long-term supervision.

The authors conclude that a “court-plus” model of judicial officers and non-judicial staff would deliver enhanced judicial qualities to authorizations while also providing continuous engagement through ongoing review and supervision.

This post was first published on the JNSLP website and is reproduced here with permission and thanks. The original piece and a link to the authors’ article can be found here.