The Online Safety Bill: Where Are We Now and Will It Succeed?

Image via Shutterstock

The Online Safety Bill, a landmark piece of legislation that introduces a new set of internet laws to protect children and adults from online harms, is currently being debated at Committee Stage in the House of Lords.

The Bill will establish a regulatory framework for certain online services. These include user-to-user services, such as Instagram, Twitter and Facebook, and search services, such as Google.

The UK government’s stated aim in introducing the Bill is “to make Britain the best place in the world to set up and run a digital business, while simultaneously ensuring that Britain is the safest place in the world to be online”.

The Bill will place duties of care on both regulated user-to-user service providers and regulated search service providers. Regulated service providers will have duties relating to, among other things: (a) illegal content; (b) protecting children; (c) user empowerment; (d) content of democratic importance, news publisher content and journalistic content; (e) freedom of expression and privacy; and (f) fraudulent advertising.

The Bill also does two other distinct but interconnected things. It introduces age-verification requirements for pornography providers (which are not user-to-user services), as well as new criminal offences such as encouraging self-harm and epilepsy trolling.

This makes it a long, wide-ranging and complex Bill.

Moreover, the Bill will place more responsibility on technology giants to keep their users safe. It will give Ofcom, the UK’s communications regulator, the power to levy fines against non-compliant providers, and will make senior managers liable to imprisonment for failing to comply with a direction to provide Ofcom with information.

But what impact is the Bill expected to have? And what concerns are there about the implementation of this new regime?

Prof. Lorna Woods (Professor of Internet Law, University of Essex), who devised the systems-based approach to online regulation that has been adopted by the Government and whose work is widely regarded as laying the groundwork for the UK’s Online Safety Bill, was recently interviewed on this new regulatory approach.

Photo by Austin Distel via Unsplash

On 11 May 2023, Prof. Woods stepped inside BBC Radio 4’s Briefing Room to be interviewed by David Aaronovitch. She talked about what is actually in the Bill, how the new internet laws are intended to work and what potential weaknesses remain. The programme can be accessed here.

Prof. Woods also joined Conan D’Arcy of the Global Counsel tech policy team to talk about UK tech regulation, recent criticisms of the Online Safety Bill, and the regulation of generative AI tools like ChatGPT. You can listen to the podcast here (published on 17 May 2023).

The Criminalisation of Cybercrime: Connected Dots and Blind Spots in the Development of Legal Instruments

Photo by Markus Spiske on Unsplash

Building on her 15 years of research on cybercrime, Dr. Audrey Guinchard, Senior Lecturer at Essex Law School, gave a presentation on the criminalisation of cybercrime at the 2022 Society of Legal Scholars (SLS) Conference, held on 6-9 September at King’s College London.

In her paper, Dr. Guinchard explained that regulating crime is the traditional domain of nation states; cybercrime is no exception. The first legal instruments to tackle computer-focused crimes (e.g., unauthorised access or hacking) date back to the seventies and eighties. Yet international institutions such as the OECD and the Council of Europe quickly recognised the transborder nature of cybercrime and were keen to push for the creation of a level playing field and better cooperation among nation states. In fact, one could even argue that international efforts at criminalisation were concomitant with, if not anticipatory of, national legal instruments on cybercrime.

Dr. Guinchard pointed out that what is less well known behind this push for harmonisation is the role of the computing community, a scientific community which has international dialogue at its heart and which has frequently engaged with legal professionals more than legal professionals have engaged with computer scientists. These key features of the criminalisation of cybercrime continue to shape modern legislation, as the movement for reforming the UK Computer Misuse Act demonstrates.

Yet, Dr. Guinchard emphasised that blind spots remain: comparative law analyses can be superficial; the international outlook remains dominated by Western/European countries, ignoring the many voices of Asia, Africa and Latin America; the link between improving cybersecurity and decreasing cybercrime remains unappreciated; and criminalisation can carry hidden agendas which turn the fight against cybercrime into a battleground of values, as the recent push for the UN treaty on cybercrime illustrates.

So, while the transborder nature of cybercrime has long been a rallying cry for its worldwide criminalisation, the resulting legal frameworks continue to be subject to various influences and forces, acknowledged and unacknowledged, leading to a paucity of information as to how effective the law is in tackling cybercrime. Dr. Guinchard argued that reflecting on those pathways to criminalisation may allow us to move away from the hype and understatement which have marred the field since its inception.

A copy of Dr. Guinchard’s slides can be downloaded below. She can be contacted at this email address: abguin@essex.ac.uk.

How Tech Companies Can Tackle Violence Against Women and Girls (VAWG): A VAWG Code of Practice

Photo by Katherine Hanlon

The Online Safety Bill presents an opportunity to address violence against women and girls in its digital dimensions and hold accountable the tech platforms that profit from this abuse.

But if the new law passes in its current format, it will leave women and girls facing violence and the threat of harm in their everyday online interactions.

In an event co-hosted by Maria Miller MP and Baroness Nicky Morgan, experts and leading organisations will discuss how the Online Safety Bill provides an essential vehicle to hold tech companies accountable for preventing and tackling VAWG, and why a VAWG Code of Practice must accompany the Bill to ensure tech companies take proactive steps to prevent VAWG in a comprehensive and systematic way.

The End Violence Against Women Coalition, Carnegie UK, Lorna Woods (Essex Law School), Clare McGlynn (Durham Law School), Glitch, NSPCC, Refuge, and 5Rights have worked together to develop a VAWG Code of Practice (CoP) that meets the rights and needs of women and girls, including those experiencing intersecting inequalities.

The CoP sets out how the regulator will recommend that tech companies meet their legal obligations to identify, respond to and prevent VAWG on their platforms. A copy of the CoP can be downloaded here:

This would create safer online spaces for women and girls – spaces where action is taken to prevent abuse, perpetrators and the platforms that ignore this abuse face consequences, and our self-expression is not restricted by the threat of violence.

Register for the event here.

The Future of AI Liability in Europe

Image by VectorStock

Artificial Intelligence (AI) could revolutionise the worldwide economy as well as the way we live, work and interact with each other. While this new technology certainly presents great potential, it also comes with significant risks to human life, health and wellbeing – among other risks.

In an effort to prepare for this new environment, the European Commission has been at the forefront of several initiatives that aim to provide a harmonised regulatory framework for the safe deployment and use of AI systems across Member States [1]. Amongst its most recent initiatives is a public consultation on how to adapt civil liability rules to the digital age and artificial intelligence. This public consultation, which closed on 10 January 2022, aimed to collect views on:

  1. how to improve the applicability of the Product Liability Directive (PLD) to the digital age, including AI, and
  2. whether there is a need to further harmonise rules of liability for damage caused by AI systems beyond the PLD.

The consultation is an important first step towards building a robust liability framework fit to address the current and future challenges posed by AI and the digital age in Europe. The changes that could be implemented as a result of the consultation could be immense and produce far-reaching consequences. Understandably, this public consultation attracted a high level of interest from various stakeholders, including businesses (Google, Bosch, Siemens, Avast), consumer organisations (BEUC, France Assos Santé), insurers (AXA, Insurance Europe, France Assureurs), NGOs, interest groups, legal scholars as well as members of the general public. In total, the European Commission received around 300 responses.

Prof. Jonas Knetsch (University of Paris 1 Panthéon-Sorbonne) and Dr. Emmanuelle Lemaire (University of Essex) assembled a small ad hoc research group, comprising Prof. Michel Cannarsa (The Catholic University of Lyon), Dr. Laurie Friant (University of Paris 1 Panthéon-Sorbonne) and Prof. Simon Taylor (Paris Nanterre University), to produce a report in response to the consultation.

Overall, the authors of this report were of the view that the PLD should be adapted to enhance consumer protection in the digital age and increase legal certainty for all stakeholders. The authors also recognised that AI technology posed specific challenges and recommended that complementary measures be adopted to ensure the safe deployment and use of AI systems across Member States.

Adapting the PLD rules to the digital age and AI

The Product Liability Directive, which came into force on 30 July 1985, was a response to the increasing demand for consumer protection in a hyper-industrialised environment where goods were mass-produced and mass-consumed. In essence, the Directive aimed to offer a high level of protection to consumers while ensuring that producers did not bear an undue burden. It was thus designed to strike a careful balance between the interests of both consumers and producers.

Yet, we must remember that the Directive was implemented at a time when the Internet was still in its early days, the use of AI remained largely theoretical, marketplaces were positioned in the ‘physical world’, and concepts such as ‘circular economy’ and ‘the Internet of Things’ (IoT) were simply non-existent. To say that the PLD – which has not undergone any major changes since 1985 – is in need of reform is certainly an understatement.

In order to adequately adapt the PLD to the digital age and AI, the authors of the aforementioned report took the view that the scope of application of the PLD should be extended, and in particular that:

  • the concept of ‘product’ should be expressly extended to intangible goods,
  • the concept of ‘producer’ should be extended to include online marketplaces and remanufacturers,
  • the concept of ‘damage’ should be extended to include specific types of immaterial loss (i.e. privacy or data protection infringements not already covered under the General Data Protection Regulation, and damage to, or the destruction of, data).

The authors of the report also recommended the amendment of specific PLD rules in certain situations, and more specifically:

  • the removal of the development risk defence for AI products only,
  • the removal of the 10-year longstop period in cases of death or personal injury,
  • a clarification of the conditions enabling the 3-year limitation period to start running,
  • an alleviation of the burden of proof of ‘defect’ and ‘causation’ for products classified as ‘technically complex’ (which would include AI products and the Internet of Things).

In addition to recommending that the PLD be adapted, the authors of the report were also in favour of the European Commission adopting complementary measures in the context of AI to account for the specific features presented by this technology (autonomy, complexity, opacity, vulnerability, and openness).

Adopting complementary measures in the context of AI

The regulation of AI is proving challenging across legal systems, not least because of the difficulty in defining what AI is and what can be classified as an AI system. The European Commission has recently sought to offer a clear – but open – definition of the term ‘AI system’ to ensure legal certainty while providing the necessary flexibility to accommodate future technological developments. As the definition currently stands, an AI system means software that is developed with certain listed techniques and approaches ‘and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.’[2] The definition is quite broad and, in consequence, the range of products based on – or using – AI systems is diverse, including voice assistants, image-analysing software, search engines, speech and face recognition systems, as well as advanced robots, autonomous cars, drones and Internet of Things applications. Not all these products present the same type or level of risk, and some AI-based products are therefore more dangerous than others.

The authors of the report recommended that the European Commission consider:

  • the harmonisation of strict liability where AI-based products or services create a ‘serious risk of damage’ to consumers with an option to allow Member States to offer more protective liability rules to consumers,
  • the harmonisation of mandatory liability insurance for certain AI products,
  • the harmonisation of liability rules regarding the compensation of specific types of immaterial loss beyond the PLD (i.e. privacy or data protection infringements not already covered under the General Data Protection Regulation, and damage to, or the destruction of, data).

If you are interested in knowing more about the recommendations made by this university group to the European Commission, you can find a copy of their report (no. F2771740) – written in French – on the EC website or download it directly from our blog below:


[1] See e.g. European Commission, Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions – Artificial Intelligence for Europe (COM(2018) 237 final); European Commission, White Paper on Artificial Intelligence – A European approach to excellence and trust (COM(2020) 65 final); European Commission, Communication on the Coordinated Plan on Artificial Intelligence (COM(2021) 205 final); European Commission, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative Acts (COM(2021) 206 final).

[2] European Commission, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative Acts (COM(2021) 206 final), Article 3(1).

The Use of Digital Reconstruction Technology in International Law

Photo by Sajad Nori

Digital reconstructions of crime scenes are being used more frequently in both domestic and international courts as the technology becomes more developed and accessible to courtroom actors.

Digital reconstructions can be beneficial, especially in the context of international criminal law, as they allow judges to visit crime scenes that would otherwise be too expensive or dangerous to travel to in person. However, there are inherent risks that come with the use of this novel type of evidence in a court of law.

Sarah Zarmsky, a doctoral candidate with the Human Rights Centre at the University of Essex, published an article titled ‘Why Seeing Should Not Always Be Believing: Considerations Regarding the Use of Digital Reconstruction Technology in International Law’ in the Journal of International Criminal Justice (JICJ).

Sarah’s article explores some key considerations which arise if digital reconstructions are to be used in international criminal courts and tribunals, with an emphasis on the rights of the accused and effects on victims and witnesses.

The article argues that in order for fair trial standards to be upheld and for international courts to fulfil their roles not just as prosecutors of crimes, but as seekers of truth and reconciliation, digital reconstructions need to be approached with caution and analysed through a critical eye.

Sarah will present her paper as part of the Launch Event for the JICJ Special Issue on New Technologies and the Investigation of International Crimes, which will be held virtually on 9 November 2021 at 15:30-17:00 GMT.

This event will bring together the authors of articles in the special issue, including Essex Law School’s Dr. Daragh Murray, who also contributed to the issue and served as one of its co-editors, for a discussion of their key insights on the future role of technology in accountability processes. Those interested in attending can register here.

ICO Targets Companies for Seeking to Illegally Make Profit from the Current Public Health Emergency

Photo by Adomas Aleno

Dr. Alexandros Antoniou, Lecturer in Media Law, University of Essex

On 24 September and 8 October 2020, the Information Commissioner’s Office (ICO), the United Kingdom’s independent body established to uphold information rights, imposed fines on two companies for sending thousands of nuisance marketing texts and unlawful marketing emails at the height of the current pandemic.

In September 2020, Digital Growth Experts Limited (DGEL) was issued with a monetary penalty of GBP 60,000 in relation to a serious contravention of Regulations 22 and 23 of the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR). The PECR provide for specific privacy rights in relation to electronic communications. They include rules on marketing calls, emails, texts and faxes; cookies (and similar technologies); keeping communications services secure; as well as on customer privacy in relation to traffic and location data, itemised billing, line identification, and directory listings. Under the 2003 Regulations, ICO has the power to impose a monetary penalty of up to GBP 500,000 on a data controller.

The Commissioner found that between 29 February and 30 April 2020, DGEL had transmitted 16,190 direct marketing texts promoting a hand sanitising product, which was claimed to be “effective against coronavirus”. The company came to the attention of the Commissioner after several complaints were received via the GSMA’s spam reporting tool (the GSMA is an organisation that represents the interests of mobile operators worldwide).

In the course of the investigation, DGEL was unable to provide sufficient evidence of valid consent (as required by PECR) for any of the messages delivered to subscribers over the relevant period. The company’s explanations for its practices and the means by which it had obtained the data used for its direct marketing were found to be “unclear and inconsistent”.

DGEL had also used data obtained via social media ads, which purported to offer individuals free samples of the product, to opt them automatically into receiving direct marketing, without advising them that their data would be used for this purpose and without giving them, at the point the data was collected, a simple way of refusing the use of their contact details for direct marketing.

In October 2020, ICO again took action against a London-based software design consultancy, Studios MG Limited (SMGL), which had sent spam emails selling face masks during the pandemic. The company was fined GBP 40,000 for having transmitted unsolicited communications by means of electronic mail for the purposes of direct marketing, contrary to Regulation 22 of PECR.

More specifically, on 30 April – in the midst of the pandemic – SMGL sent up to 9,000 unlawful marketing emails to people without their permission. SMGL did not hold any evidence of consent for the individuals it had targeted in its one-day direct marketing campaign. ICO held that SMGL’s campaign had been made possible by using “data which had been scraped from various vaguely defined sources”.

ICO’s examination also found that SMGL’s director had decided to buy face masks to sell on at a profit, despite the fact that the company bore no apparent relation to the supplying of personal protective equipment (PPE). Moreover, it was impossible in SMGL’s case to determine the total number of individuals whose privacy had been affected, as the company had deleted a database with key data evidencing the full extent of the volume of emails delivered.

During the pandemic, ICO has been investigating several companies as part of its efforts to protect people from exploitation by unlawful marketing-related data processing activities. The ICO Head of Investigations said in a statement that DGEL “played upon people’s concerns at a time of great public uncertainty, acting with a blatant disregard for the law, and all in order to feather its own pockets.” A hard line was also taken in relation to SMGL. The Head of Investigations stated that “nuisance emails are never welcome at any time, but especially when people may be feeling vulnerable or worried and their concerns heightened.”

This article first appeared on the IRIS Merlin database of the European Audiovisual Observatory and is reproduced here with permission and thanks. Read the original article here.

Human Rights Expert Receives Major Funding to Investigate Impact of Algorithms on Democracy

Photo by Ari He

An Essex human rights expert has been awarded major funding to look at the impact of Artificial Intelligence-assisted decision-making on individual development and the functioning of democracy.

Dr Daragh Murray, from the School of Law and Human Rights Centre, is among the latest wave of individuals to receive funding as part of UK Research and Innovation’s Future Leaders Fellowships scheme. Dr Murray has been awarded over £1 million for an initial period of four years, to examine the impact of Artificial Intelligence (AI) assisted decision-making in a range of areas.

Dr Daragh Murray said: “Governments around the world are already using AI to help make important decisions that affect us all. This data-driven approach can offer key benefits, but it also relies on the ever-increasing collection of data on all aspects of our personal and public lives, representing both a step change in the information the state holds on us all, and a transformation in how that information is used.

“I want to look at the unintended consequences of this level of surveillance – the impact on how individuals develop their identity and how democratic society flourishes. Will a chilling effect emerge that changes individual behaviour? And what might the impact of this be? Will the knowledge that our activities are tracked and then translated into government decisions affect how we, for example, develop our sexual identity or our political opinions? Will we all be pushed towards the status quo in fear of the consequences of standing out?

“Ultimately what will the effect of this be on the well-being of our democracy?”

The Future Leaders Fellowships scheme is designed to establish the careers of world-class research and innovation leaders across the UK.

Dr Murray’s project will be interdisciplinary, working across human rights law, sociology and philosophy.

Dr Murray said: “We will be looking at lived experience in the context of wider discussions about how individuals and societies flourish. The intention is to re-imagine the human rights framework to address this very 21st century problem.”

Dr Murray is currently a member of the Human Rights Big Data & Technology Project, based at the University of Essex Human Rights Centre, and the Open Source for Rights Project, based at the University of Swansea. He was co-author with Professor Pete Fussey of the independent report into the Metropolitan Police Service’s trial of live facial recognition, published in July 2019.

He is a recognised expert in the field of Digital Verification, using open source investigation techniques to verify evidence of human rights abuses. He founded Essex Digital Verification Unit (DVU) in 2016 and co-edited Digital Witness, the first textbook in the field, with Sam Dubberley and Alexa Koenig. In 2019, Essex DVU was recognised with a Times Higher Education Award for International Collaboration of the Year, for its role in Amnesty International’s Digital Verification Corps.

The Fellows appoint mentors. In addition to Essex mentors Professor Lorna McGregor and Professor Pete Fussey, Dr Murray will benefit from the involvement of a stellar group of global experts: Professor Yuval Shany, from the Hebrew University of Jerusalem, is Vice-Chair of the United Nations Human Rights Committee, and Deputy President of the Israel Democracy Institute; Professor Ashley Deeks is a Research Professor of Law at University of Virginia Law School, Director of the School’s National Security Law Center and a member of the State Department’s Advisory Committee on International Law; Professor Alexa Koenig is Executive Director of University of California Berkeley’s Human Rights Center and sits on a number of national and international bodies looking at the impact of technology, as well as the board of advisors for ARCHER, a UC Berkeley-established non-profit that “leverages technology to make data-driven investigations accessible, smarter and more scalable.”

Launching the latest round of Future Leaders Fellowships, UK Research and Innovation Chief Executive, Professor Dame Ottoline Leyser, said: “Future Leaders Fellowships provide researchers and innovators with freedom and support to drive forward transformative new ideas and the opportunity to learn from peers right across the country.

“The fellows announced today illustrate how the UK continues to support and attract talented researchers and innovators across every discipline to our universities and businesses, with the potential to deliver change that can be felt across society and the economy.”

This story originally appeared on the University of Essex news webpage and is reproduced here with permission and thanks.

ICO’s Age Appropriate Design Code of Practice Comes Into Effect

Photo by Igor Starkov

Dr. Alexandros Antoniou, Lecturer in Media Law, University of Essex

On 2 September 2020, the Information Commissioner’s Office (ICO), the United Kingdom’s independent body established to uphold information rights, formally issued its Age Appropriate Design Code of Practice which should be followed by online services to protect children’s privacy.

The Age Appropriate Design Code of Practice, the first of its kind, is a statutory code required under Section 123 of the Data Protection Act 2018 and aims to address the increasing “datafication” of children. The Code was first published on 12 August 2020 and, following completion of its parliamentary stages, it came into force on 2 September 2020. The Information Commissioner, Elizabeth Denham CBE, stated: “For all the benefits the digital economy can offer children, we are not currently creating a safe space for them to learn, explore and play. This statutory Code of Practice looks to change that, not by seeking to protect children from the digital world, but by protecting them within it.”

The Code’s primary focus is to set a benchmark for the appropriate protection of children’s personal data and provide default settings which ensure that children have the best possible access to online services whilst minimising data collection and use, by default. It sets out 15 standards on data collection and protection, and reflects a risk-based approach. Section 123(7) of the DPA 2018 defines “standards of age-appropriate design” as “such standards of age-appropriate design of such services as appear to the Commissioner to be desirable having regard to the best interests of children.” The 15 standards of the Age Appropriate Design Code include a duty to conduct data protection impact assessments; transparency; policy and community standards; data sharing and minimisation; geolocation; parental controls; nudge techniques; and online tools, among others. For a brief overview of the standards laid out in the Code, see here. Because different services will need to implement different technical solutions, the ICO acknowledges that these are not intended as technical standards, but as a bundle of technology-neutral design principles and practical privacy features.

These principles apply to any online products or services (including, for instance, educational websites, social media platforms, apps, online games, and connected toys with or without a screen) that process personal data and are likely to be used by children under 18 in the UK; therefore, they are not limited to services specifically aimed at children. The Code covers entities based in the UK as well as entities based outside of the UK if their services are provided to (or monitor) users based in the UK. Services provided on an indirect charging basis (for example, funded by advertising) also fall within its remit.

The ICO and the courts will take the Code into account in determining whether the GDPR and PECR requirements have been met for the purposes of enforcement action. Although the Code is now in effect, the industry has been given a 12-month implementation period to get up to speed and introduce suitable changes. After a year in force, the ICO will undertake a review of the Code and its effectiveness.

This article was first published in the 9th issue of IRIS Legal Observations of the European Audiovisual Observatory and is reproduced here with permission and thanks.

The Oxford Statement on International Law Protections Against Foreign Electoral Interference through Digital Means

Photo by Joshua Sortino

Dr. Antonio Coco, Lecturer in Law at the University of Essex, has co-drafted The Oxford Statement on International Law Protections Against Foreign Electoral Interference through Digital Means, which has been signed by 139 international lawyers so far.

The Statement is the third in a series — informally known as the “Oxford Process” — aiming to clarify the rules of international law applicable to cyber operations which threaten areas of pressing global concern.

The first Statement (May 2020) concerned the protection of the healthcare sector. The second Statement (July 2020) focused on the protection of vaccine research. The third and most recent one (October 2020) tackles foreign electoral interference, and can be read at EJIL:Talk!, Opinio Juris and Just Security.

Reforming Cybercrime Legislations to Support Vulnerability Research: the UK Experience and Beyond

CODE BLUE (29-30 October 2020) is an international conference where the world’s top information security specialists gather to give cutting-edge talks, and a place for all participants to exchange information and interact beyond borders and languages. As technology and society move forward and the IoT (Internet of Things) becomes a reality, security is increasingly becoming an urgent issue. The Internet world also needs to gather researchers to collaborate and think together about ways to respond to emergency situations, and come up with possible solutions. CODE BLUE aims to be a place where international connections and communities form and grow, and will contribute to a better Internet world by connecting people through CODE (technology), beyond and across the BLUE (oceans).

This year, Dr Audrey Guinchard (Senior Lecturer in Law, University of Essex) gave a keynote on ‘Reforming cybercrime legislations to support vulnerability research: the UK experience and beyond’.

Cybercrime legislations – or hacking laws – tend to be notoriously broad, resting on a set of assumptions about what ‘unauthorised access’ means, assumptions which hardly match those of the technical or ethical fields. The result is that the offences of unauthorised access and misuse of tools have the potential to criminalise most aspects of legitimate vulnerability research (discovery, proof of concept, disclosure). Independent security researchers are notably at risk of criminal prosecution as they work, by definition, without vendors’ prior authorisation.

The UK is a particular case in point, having drafted its original Computer Misuse Act 1990 in such a way that even switching a computer on can constitute unauthorised access. Further reforms in 2006 and 2015 expanded the scope of the legislation even more, by modifying or adding other offences as broad in scope as the original ones. While the UK is in that respect an outlier, the EU Directive 2013/40/EU on attacks against information systems, as well as the Council of Europe Convention on Cybercrime (No. 185), which is the de facto international treaty, are not without their own weaknesses, despite serious and effective efforts to restrict the scope of criminal law and protect security researchers.

Prosecution guidelines or a memorandum of understanding between the security industry and prosecutorial authorities are a welcome step towards avoiding outlandish prosecutions of security researchers, but Dr Guinchard argued that they are not sufficient to protect researchers once a prosecution starts. Their motive to improve security (and the methods used) will not constitute a legal argument unless a public interest defence exists.

Hence Dr Guinchard’s proposal to reform cybercrime legislations (UK, EU and the Convention) by incorporating a public interest defence to cybercrime offences, in particular to the ‘hacking’ offence (unauthorised access). Momentum is certainly gathering in the UK. The Criminal Law Reform Now Network (CLRNN) has released a comprehensive study of the UK Computer Misuse Act with a series of recommendations. It is time to make cybercrime legislation fit for the 21st century, to borrow the slogan of a significant part of the UK security industry endorsing the report and the reform.

To read some of Dr Guinchard’s research papers which formed the background of this research, please see here and here.