
Artificial Intelligence (AI) could revolutionise the worldwide economy as well as the way we live, work and interact with one another. While this new technology certainly presents great potential, it also carries significant risks, not least to human life, health and wellbeing.
In an effort to prepare for this new environment, the European Commission has been at the forefront of several initiatives that aim to provide a harmonised regulatory framework for the safe deployment and use of AI systems across Member States [1]. Amongst its most recent initiatives is a public consultation on how to adapt civil liability rules to the digital age and artificial intelligence. This public consultation, which closed on 10 January 2022, aimed to collect views on:
- how to improve the applicability of the Product Liability Directive (PLD) to the digital age, including AI, and
- whether there is a need to further harmonise rules of liability for damage caused by AI systems beyond the PLD.
The consultation is an important first step towards building a robust liability framework fit to address the current and future challenges posed by AI and the digital age in Europe. The changes that may result from it could be immense, with far-reaching consequences. Understandably, this public consultation attracted a high level of interest from various stakeholders, including businesses (Google, Bosch, Siemens, Avast), consumer organisations (BEUC, France Assos Santé), insurers (AXA, Insurance Europe, France Assureurs), NGOs, interest groups, legal scholars as well as members of the general public. In total, the European Commission received around 300 responses.
Prof. Jonas Knetsch (University of Paris 1 Panthéon-Sorbonne) and Dr. Emmanuelle Lemaire (University of Essex) assembled a small ad hoc research group, comprising Prof. Michel Cannarsa (The Catholic University of Lyon), Dr. Laurie Friant (University of Paris 1 Panthéon-Sorbonne) and Prof. Simon Taylor (Paris Nanterre University), to produce a report in response to the consultation.
Overall, the authors of this report were of the view that the PLD should be adapted to enhance consumer protection in the digital age and increase legal certainty for all stakeholders. The authors also recognised that AI technology posed specific challenges and recommended that complementary measures be adopted to ensure the safe deployment and use of AI systems across Member States.
Adapting the PLD rules to the digital age and AI
The Product Liability Directive, which came into force on 30 July 1985, was a response to the increasing demand for consumer protection in a hyper-industrialised environment where goods were mass-produced and mass-consumed. In essence, the Directive aimed to offer a high level of protection to consumers while ensuring that producers did not bear an undue burden. It was thus designed to strike a careful balance between the interests of consumers and producers.
Yet we must remember that the Directive was adopted at a time when the Internet was still in its early days, the use of AI remained largely theoretical, marketplaces existed only in the ‘physical world’, and concepts such as the ‘circular economy’ and the ‘Internet of Things’ (IoT) were simply non-existent. To say that the PLD, which has not undergone any major changes since 1985, is in need of reform is certainly an understatement.
In order to adequately adapt the PLD to the digital age and AI, the authors of the aforementioned report took the view that the scope of application of the PLD should be extended, and in particular that:
- the concept of ‘product’ should be expressly extended to intangible goods,
- the concept of ‘producer’ should be extended to include online marketplaces and remanufacturers,
- the concept of ‘damage’ should be extended to include specific types of immaterial loss (i.e. privacy or data protection infringements not already covered under the General Data Protection Regulation, and damage to, or the destruction of, data).
The authors of the report also recommended amending certain PLD rules in specific situations, namely:
- the removal of the development risk defence for AI products only,
- the removal of the 10-year longstop period in cases of death or personal injury,
- a clarification of the conditions enabling the 3-year limitation period to start running,
- an alleviation of the burden of proof of ‘defect’ and ‘causation’ for products classified as ‘technically complex’ (which would include AI products and the Internet of Things).
In addition to recommending that the PLD be adapted, the authors of the report were also in favour of the European Commission adopting complementary measures in the context of AI to account for the specific features presented by this technology (autonomy, complexity, opacity, vulnerability, and openness).
Adopting complementary measures in the context of AI
The regulation of AI is proving challenging across legal systems, not least because of the difficulty in defining what AI is and what can be classified as an AI system. The European Commission has recently sought to offer a clear but open definition of the term ‘AI system’ to ensure legal certainty while providing the necessary flexibility to accommodate future technological developments. As the definition currently stands, an AI system means software that is developed with certain listed techniques and approaches ‘and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.’[2] The definition is quite broad, and as a consequence the range of products based on, or using, AI systems is diverse, including voice assistants, image-analysing software, search engines, speech and face recognition systems, as well as advanced robots, autonomous cars, drones and Internet of Things applications. Not all these products present the same type or level of risk, and some AI-based products are therefore more dangerous than others.
The authors of the report recommended that the European Commission consider:
- the harmonisation of strict liability where AI-based products or services create a ‘serious risk of damage’ to consumers, with an option allowing Member States to offer more protective liability rules to consumers,
- the harmonisation of mandatory liability insurance for certain AI products,
- the harmonisation of liability rules regarding the compensation of specific types of immaterial loss beyond the PLD (i.e. privacy or data protection infringements not already covered under the General Data Protection Regulation, and damage to, or the destruction of, data).
If you are interested in knowing more about the recommendations made by this university group to the European Commission, you can find a copy of their report (no. F2771740), written in French, on the EC website or download it directly from our blog below:
[1] See e.g. European Commission, Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions – Artificial Intelligence for Europe (COM(2018) 237 final); European Commission, White Paper on Artificial Intelligence – A European approach to excellence and trust (COM(2020) 65 final); European Commission, Communication on the Coordinated Plan on Artificial Intelligence (COM(2021) 205 final); European Commission, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative Acts (COM(2021) 206 final).
[2] European Commission, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative Acts (COM(2021) 206 final), Article 3(1).