Unbreakable Shields: Some Tips to Safeguard Your Digital Realm

Image via Shutterstock

By Dr. Audrey Guinchard, Senior Lecturer, Essex Law School

In the vast and interconnected realm of the digital age, our lives have become intrinsically linked to the virtual world. From online banking to social media interactions, our personal and professional activities have found a new home in cyberspace. However, as we embrace the convenience and opportunities offered by the digital revolution, we must also acknowledge the shadows of cyber threats that pose a constant risk to our security.

We live in an era where sophisticated hackers and malicious actors continuously exploit vulnerabilities, seeking to breach our defences and gain unauthorised access to our sensitive information. We have all heard of viruses, ransomware, phishing attacks, scams… but it is not always easy to keep on top of best cybersecurity practices in our busy daily lives.

Who has never delayed updating their operating system for a few days because of the sheer inconvenience of having to stop working and put the device aside for a solid 20 minutes?

And what about those annoying passwords? Who has never been frustrated at failing to remember an obscure combination of letters, numbers and special characters in no logical order? Even the author who recommended this style of password back in 2003 has since regretted his initial advice!

And how about the apparently preposterous advice not to re-use passwords, when one has to remember about 70 to 100 of them?

The consequences of a successful cyber-attack can be devastating, leading to financial losses, identity theft, and irreparable damage to our digital lives. So, what is a good starting point for good cybersecurity practices? No single practice is, on its own, fail-proof. It is their combination that will often delay an attacker who, discouraged, will turn towards easier targets. It is also about minimising the impact our mistakes may have.

Start with an audit of your practices, so that you know where to begin. The easiest way is to answer the questionnaire on the UK National Cyber Security Centre (NCSC) website: its Cyber Action Plan. It truly takes only a minute or two; the questions may seem basic, but they cut to the heart of the best practices we can put in place. Then follow the detailed advice it gives, tailored to your answers, on what you need to do.

Pay particular attention to your passwords. The question to ask yourself is always: if somebody gets hold of this password, what can they retrieve and find out? Will the password give them access to my bank account? To a work account? To social media? Or to all three?

Notably, you can check here whether a password has been compromised, or whether the same has happened to your email address here.
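For readers comfortable with a small script, this kind of check can even be done without sending the actual password anywhere. The sketch below is a minimal illustration, on the assumption that the checker linked above is the widely used 'Pwned Passwords' service and its public range API: only the first five characters of the password's SHA-1 hash are transmitted, and the comparison happens on your own machine (a technique known as k-anonymity).

```python
# Minimal sketch; assumes the breach checker referred to above is the public
# "Pwned Passwords" range API (api.pwnedpasswords.com) rather than another service.
import hashlib
import urllib.request


def password_breach_count(password: str) -> int:
    """Return how many times a password appears in known breach data.

    Uses k-anonymity: only the first 5 hex characters of the SHA-1 hash
    leave this machine; the full password is never transmitted.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each line of the response looks like "<35-char hash suffix>:<count>".
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    # Illustration only: never paste a real password into shared code or logs.
    print(password_breach_count("password123"))
```

A non-zero count is a strong signal to change that password everywhere it is used; a count of zero does not guarantee safety, only that the password has not yet appeared in the breach data the service holds.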

You may want to consider a password manager, but be aware that most password managers store your data online, so your password vault is not immune to hacking, as happened to the leading provider LastPass in 2022, and it will not be the last.

So, the question is: do you really need this password to be stored online?

For example, if you only ever do your tax return from home, do you need to store your HMRC password and ID number online? Whoever has your HMRC details may well be able to access many other government services and impersonate you, and identity theft is no fun!

For iPhone users, disable access to the Control Centre when your phone is locked; otherwise, even with your screen locked, anyone holding your phone can control parts of it. To do so, go to Settings, then 'Face ID (or Touch ID) & Passcode', scroll down to 'Allow Access When Locked' (under the Voice Dial option) and disable 'Control Centre', 'Accessories' and 'Wallet'.

We all make mistakes; we are human, after all! But the cost of weak cybersecurity practices can be identity theft and lost data, for ourselves and for those we correspond with. So, do not delay your NCSC security audit, and follow it up! By adopting some proactive strategies, we can take decisive steps towards protecting ourselves and preserving the sanctity of our digital identities.

The Future of AI Liability in Europe

Image by VectorStock

Artificial Intelligence (AI) could revolutionise the world-wide economy as well as the way we live, work and interact with each other. While this new technology certainly presents great potential, it also comes with important risks to human life, health and wellbeing – among other risks.

In an effort to prepare for this new environment, the European Commission has been at the forefront of several initiatives that aim to provide a harmonised regulatory framework for the safe deployment and use of AI systems across Member States [1]. Amongst its most recent initiatives is a public consultation on how to adapt civil liability rules to the digital age and artificial intelligence. This public consultation, which closed on 10 January 2022, aimed to collect views on:

  1. how to improve the applicability of the Product Liability Directive (PLD) to the digital age, including AI, and
  2. whether there is a need to further harmonise rules of liability for damage caused by AI systems beyond the PLD.

The consultation is an important first step towards building a robust liability framework fit to address the current and future challenges posed by AI and the digital age in Europe. The changes that could be implemented as a result of the consultation are potentially immense and could produce far-reaching consequences. Understandably, this public consultation attracted a high level of interest from various stakeholders, including businesses (Google, Bosch, Siemens, Avast), consumer organisations (BEUC, France Assos Santé), insurers (AXA, Insurance Europe, France Assureurs), NGOs, interest groups and legal scholars, as well as members of the general public. In total, the European Commission received around 300 responses.

Prof. Jonas Knetsch (University of Paris 1 Panthéon-Sorbonne) and Dr. Emmanuelle Lemaire (University of Essex) assembled a small ad hoc research group, comprising Prof. Michel Cannarsa (The Catholic University of Lyon), Dr. Laurie Friant (University of Paris 1 Panthéon-Sorbonne) and Prof. Simon Taylor (Paris Nanterre University), to produce a report in response to the consultation.

Overall, the authors of this report were of the view that the PLD should be adapted to enhance consumer protection in the digital age and increase legal certainty for all stakeholders. The authors also recognised that AI technology posed specific challenges and recommended that complementary measures be adopted to ensure the safe deployment and use of AI systems across Member States.

Adapting the PLD rules to the digital age and AI

The Product Liability Directive, which came into force on 30 July 1985, was a response to the increasing demand for consumer protection in a hyper-industrialised environment where goods were mass-produced and mass-consumed. In essence, the Directive aimed to offer a high level of protection to consumers while ensuring that producers did not bear an undue burden. It was thus designed to strike a careful balance between the interests of both consumers and producers.

Yet, we must remember that the Directive was implemented at a time when the Internet was still in its early days, the use of AI remained largely theoretical, marketplaces were positioned in the ‘physical world’, and concepts such as the ‘circular economy’ and the ‘Internet of Things’ (IoT) were simply non-existent. To say that the PLD – which has not undergone any major changes since 1985 – is in need of reform is certainly an understatement.

In order to adequately adapt the PLD to the digital age and AI, the authors of the aforementioned report took the view that the scope of application of the PLD should be extended, and in particular that:

  • the concept of ‘product’ should be expressly extended to intangible goods,
  • the concept of ‘producer’ should be extended to include online marketplaces and remanufacturers,
  • the concept of ‘damage’ should be extended to include specific types of immaterial loss (i.e. privacy or data protection infringements not already covered under the General Data Protection Regulation, and damage to, or the destruction of, data).

The authors of the report also recommended the amendment of specific PLD rules in certain situations, and more specifically:

  • the removal of the development risk defence for AI products only,
  • the removal of the 10-year longstop period in cases of death or personal injury,
  • a clarification of the conditions enabling the 3-year limitation period to start running,
  • an alleviation of the burden of proof of ‘defect’ and ‘causation’ for products classified as ‘technically complex’ (which would include AI products and the Internet of Things).

In addition to recommending that the PLD be adapted, the authors of the report were also in favour of the European Commission adopting complementary measures in the context of AI to account for the specific features presented by this technology (autonomy, complexity, opacity, vulnerability, and openness).

Adopting complementary measures in the context of AI

The regulation of AI is proving challenging across legal systems, not least because of the difficulty in defining what AI is and what can be classified as an AI system. The European Commission has recently sought to offer a clear – but open – definition of the term ‘AI system’, to ensure legal certainty while providing the necessary flexibility to accommodate future technological developments. As the definition currently stands, an AI system means software that is developed with certain listed techniques and approaches ‘and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.’[2] The definition is quite broad and, in consequence, the range of products based on – or using – AI systems is diverse, including voice assistants, image-analysing software, search engines, speech and face recognition systems, as well as advanced robots, autonomous cars, drones and Internet of Things applications. Not all these products present the same type or level of risk, and some AI-based products are therefore more dangerous than others.

The authors of the report recommended that the European Commission consider:

  • the harmonisation of strict liability where AI-based products or services create a ‘serious risk of damage’ to consumers with an option to allow Member States to offer more protective liability rules to consumers,
  • the harmonisation of mandatory liability insurance for certain AI products,
  • the harmonisation of liability rules regarding the compensation of specific types of immaterial loss beyond the PLD (i.e. privacy or data protection infringements not already covered under the General Data Protection Regulation, and damage to, or the destruction of, data).

If you are interested in knowing more about the recommendations made by this university group to the European Commission, you can find a copy of their report (no. F2771740) – written in French – on the EC website or download it directly from our blog below.


[1] See e.g. European Commission, Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions – Artificial Intelligence for Europe (COM(2018) 237 final); European Commission, White Paper on Artificial Intelligence – A European approach to excellence and trust (COM(2020) 65 final); European Commission, Communication Coordinated Plan on Artificial Intelligence (COM(2021) 205 final); European Commission, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative Acts (COM(2021) 206 final).

[2] European Commission, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative Acts (COM(2021) 206 final), Article 3(1).

Prescripted Living: Gender Stereotypes and Data-Based Surveillance in the UK Welfare State

Photo by cottonbro from Pexels

From the post-war welfare state that inherently assumed married women would be supported by their husbands, to the 21st-century introduction of Universal Credit, which financially disincentivises some women in cohabiting relationships from working: the welfare benefits system in the UK has historically favoured individuals who conform to gender stereotypes.

At the same time, the welfare benefits system also uses more and more surveillance of claimants to determine who is ‘deserving’ of support, using increasingly sophisticated data analysis tools to impose conditions on welfare claimants and punish those who do not comply.

Laura Carter, a PhD candidate in the Human Rights, Big Data and Technology Project at the University of Essex’s Human Rights Centre, has published a new article in Internet Policy Review which argues that both stereotyping and surveillance reinforce structures of categorisation – in which individuals are treated according to group membership, whether or not it is accurate – and of control, through normalising some behaviours while punishing others.

The article argues that the combination of gender stereotyping and surveillance in the UK welfare state risks creating a vicious cycle, in which the categorisation and control dimensions of both stereotyping and surveillance reinforce each other.

This increases the likelihood of the system coercing welfare claimants—by definition, people living on low incomes—into certain ‘accepted’ behaviours, and discriminating against those who do not conform.

The increased conditionality of welfare benefits has already caused demonstrable harm to those who cannot access Universal Credit or who struggle to do so. The article further argues that the coercive, surveillant nature of the welfare state risks cementing hierarchies of power that continue to stereotype and discriminate against low-income people.

This is particularly the case for low-income women, who are expected to balance the demands of their disproportionate unpaid caring responsibilities with increasing requirements for job-search activities.

Carter’s article applies a human rights analysis – including recognition of the harms of gender stereotyping, as articulated by the Committee on the Elimination of Discrimination against Women (CEDAW Committee) – to this system of coercion and conditionality, in order to make visible the specifically gendered nature of the harm caused by surveillance and conditionality to welfare benefits claimants.

Applying analysis of gender stereotyping can further identify—and combat—harms that are inherent in the current structure of the welfare benefits system in the UK, with the aim of ensuring that benefits are accessible for all who need them.


Article full citation: Carter, L. (2021). Prescripted living: gender stereotypes and data-based surveillance in the UK welfare state. Internet Policy Review, 10(4). https://doi.org/10.14763/2021.4.1593

‘Cyber Due Diligence’: A Patchwork of Protective Obligations in International Law

Photo by Kevin Ku

With a long history in international law, the concept of due diligence has recently gained traction in the cyber context, as a promising avenue to hold states accountable for harmful cyber operations originating from, or transiting through, their territory, in the absence of attribution.

Nonetheless, confusion surrounds the nature, content, and scope of due diligence. It remains unclear whether it is a general principle of international law, a self-standing obligation, or a standard of conduct, and whether there is a specific rule requiring diligent behaviour in cyberspace.

This has created an ‘all-or-nothing’ discourse: either states have agreed to a rule or principle of ‘cyber due diligence’, or no obligation to behave diligently exists in cyberspace.

In their new article in the European Journal of International Law, Dr. Antonio Coco, Lecturer in Law at the University of Essex, and Dr. Talita de Souza Dias, Postdoctoral Research Fellow at the Oxford Institute for Ethics, Law and Armed Conflict (ELAC), propose to shift the debate from label to substance, asking whether states have duties to protect other states and individuals from cyber harms.

By revisiting traditional cases, as well as surveying recent state practice, the authors contend that – whether or not there is consensus on ‘cyber due diligence’ – a patchwork of different protective obligations already applies, by default, in cyberspace.

At their core is a flexible standard of diligent behaviour requiring states to take reasonable steps to prevent, halt and/or redress a range of online harms.

A copy of the authors’ article can be accessed here.


This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted reuse, distribution, and reproduction in any medium provided the original work is properly cited.

Article full citation: Antonio Coco, Talita de Souza Dias, ‘Cyber Due Diligence’: A Patchwork of Protective Obligations in International Law, European Journal of International Law, Volume 32, Issue 3, August 2021, Pages 771–806, https://doi.org/10.1093/ejil/chab056.

Human Rights Expert Receives Major Funding to Investigate Impact of Algorithms on Democracy

Photo by Ari He

An Essex human rights expert has been awarded major funding to look at the impact of Artificial Intelligence-assisted decision-making on individual development and the functioning of democracy.

Dr Daragh Murray, from the School of Law and Human Rights Centre, is among the latest wave of individuals to receive funding as part of UK Research and Innovation’s Future Leaders Fellowships scheme. Dr Murray has been awarded over £1 million for an initial period of four years, to examine the impact of Artificial Intelligence (AI) assisted decision-making in a range of areas.

Dr Daragh Murray said: “Governments around the world are already using AI to help make important decisions that affect us all. This data-driven approach can offer key benefits, but it also relies on the ever-increasing collection of data on all aspects of our personal and public lives, representing both a step change in the information the state holds on us all, and a transformation in how that information is used.

“I want to look at the unintended consequences of this level of surveillance – the impact on how individuals develop their identity and how democratic society flourishes. Will a chilling effect emerge that changes individual behaviour? And what might the impact of this be? Will the knowledge that our activities are tracked and then translated into government decisions affect how we, for example, develop our sexual identity or our political opinions? Will we all be pushed towards the status quo in fear of the consequences of standing out?

“Ultimately what will the effect of this be on the well-being of our democracy?”

The Future Leaders Fellowships scheme is designed to establish the careers of world-class research and innovation leaders across the UK.

Dr Murray’s project will be interdisciplinary, working across human rights law, sociology and philosophy.

Dr Murray said: “We will be looking at lived experience in the context of wider discussions about how individuals and societies flourish. The intention is to re-imagine the human rights framework to address this very 21st century problem.”

Dr Murray is currently a member of the Human Rights Big Data & Technology Project, based at the University of Essex Human Rights Centre, and the Open Source for Rights Project, based at Swansea University. He was co-author with Professor Pete Fussey of the independent report into the Metropolitan Police Service’s trial of live facial recognition, published in July 2019.

He is a recognised expert in the field of Digital Verification, using open source investigation techniques to verify evidence of human rights abuses. He founded Essex Digital Verification Unit (DVU) in 2016 and co-edited Digital Witness, the first textbook in the field, with Sam Dubberley and Alexa Koenig. In 2019, Essex DVU was recognised with a Times Higher Education Award for International Collaboration of the Year, for its role in Amnesty International’s Digital Verification Corps.

The Fellows appoint mentors. In addition to Essex mentors Professor Lorna McGregor and Professor Pete Fussey, Dr Murray will benefit from the involvement of a stellar group of global experts: Professor Yuval Shany, from the Hebrew University of Jerusalem, is Vice-Chair of the United Nations Human Rights Committee, and Deputy President of the Israel Democracy Institute; Professor Ashley Deeks is a Research Professor of Law at University of Virginia Law School, Director of the School’s National Security Law Center and a member of the State Department’s Advisory Committee on International Law; Professor Alexa Koenig is Executive Director of University of California Berkeley’s Human Rights Center and sits on a number of national and international bodies looking at the impact of technology, as well as the board of advisors for ARCHER, a UC Berkeley-established non-profit that “leverages technology to make data-driven investigations accessible, smarter and more scalable.”

Launching the latest round of Future Leaders Fellowships, UK Research and Innovation Chief Executive, Professor Dame Ottoline Leyser, said: “Future Leaders Fellowships provide researchers and innovators with freedom and support to drive forward transformative new ideas and the opportunity to learn from peers right across the country.

“The fellows announced today illustrate how the UK continues to support and attract talented researchers and innovators across every discipline to our universities and businesses, with the potential to deliver change that can be felt across society and the economy.”

This story originally appeared on the University of Essex news webpage and is reproduced here with permission and thanks.

Using Human Rights Law to Inform States’ Decisions to Deploy AI

Photo by fabio

Dr. Daragh Murray, Senior Lecturer in Law, University of Essex, has a new publication in the American Journal of International Law (AJIL) Unbound (Vol. 114, pp. 158-162) as part of a special edition asking ‘How Will Artificial Intelligence Affect International Law?‘.

The article, titled ‘Using Human Rights Law to Inform States’ Decisions to Deploy AI’, argues that states are investing heavily in artificial intelligence (AI) technology and are actively incorporating AI tools across the full spectrum of their decision-making processes. However, AI tools are currently deployed without a full understanding of their impact on individuals or society, and in the absence of effective domestic or international regulatory frameworks.

Although this haste to deploy is understandable given AI’s significant potential, it is unsatisfactory. The inappropriate deployment of AI technologies risks litigation, public backlash, and harm to human rights. In turn, this is likely to delay or frustrate beneficial AI deployments.

This essay suggests that human rights law offers a solution. It provides an organizing framework that states should draw on to guide their decisions to deploy AI (or not), and can facilitate the clear and transparent justification of those decisions.

This is an Open Access article, available in full here, distributed under the terms of the Creative Commons Attribution licence.