Enhancing Cross-Border Access to Electronic Information in Criminal Proceedings: Towards a new E-Evidence legal framework in the EU

Photo by Christian Lue on Unsplash

Dr Oriola Sallavaci, Senior Lecturer in Law, University of Essex

In recent years, cross-border exchange of electronic information has become increasingly important in enabling criminal investigations and prosecutions. As I have discussed in depth in my study “Rethinking Criminal Justice in Cyberspace: The EU E-evidence framework as a new model of cross-border cooperation in criminal matters”, the use of technology has transformed the nature of crime and evidence, leading to ‘crime without borders’ and the ‘globalisation of evidence’. An increasing number of criminal investigations rely on e-evidence, and this goes beyond cyber-dependent and cyber-enabled crimes. From an evidential point of view, today almost every crime could have an e-evidence element, as offenders often use technology, such as personal computers, notepads and camera phones, where they leave traces of their criminal activity, communications or other pieces of information that can be used to determine their whereabouts, plans or connection to a particular criminal activity.

Crime today often has a cyber component and, with it, an increasingly prominent cross-border dimension, because electronic information to be used for investigative or evidentiary purposes is frequently stored outside the investigating State. The borderless nature of cyberspace, the sophistication of the technologies and offenders’ modi operandi pose specific and novel challenges for crime investigation and prosecution that, in practice, may lead to impunity. In 2018 the European Commission found that in the EU “more than half of all investigations involve a cross-border request to access [electronic] evidence.” Yet, alarmingly, “almost two thirds of crimes involving cross-border access to e-evidence cannot be effectively investigated or prosecuted”. Challenges to accessibility relate, inter alia, to the volatility of e-information, the availability and location of data, and the legislative barriers and shortcomings that must be overcome to enhance cross-border access to electronic evidence and the effectiveness of public-private cooperation through facilitated information exchange.

Cross-border access to e-information is currently conducted through traditional judicial cooperation channels, and requests are often addressed to specific states which host many service providers (SPs). In the EU these include Mutual Legal Assistance requests and European Investigation Orders under Directive 2014/41/EU, which provides for the acquisition, access and production of evidence in one Member State (MS) for criminal investigations and proceedings in another Member State. The nature of the existing judicial cooperation instruments, the actors and procedures involved, and the ever-increasing number of requests have resulted in delays and inefficiencies, posing specific problems for investigations and prosecutions that are exacerbated by the volatility of electronic information.

In the EU, there is no harmonised framework for law enforcement cooperation with service providers. In recent years, Member States have increasingly relied on voluntary direct cooperation channels with service providers, applying different national tools, conditions and procedures. Service providers may accept direct requests from law enforcement agencies (LEAs) for non-content data as permitted by their applicable domestic law. However, the fragmented legal framework creates challenges for law enforcement, judicial authorities and service providers seeking to comply with legal requests, as they are increasingly faced with legal uncertainty and, potentially, conflicts of law.

Cross border access to electronic information requires legal instruments that are capable of efficiently supporting criminal investigations and prosecutions and that, at the same time, have in place adequate conditions and safeguards that ensure full compliance with fundamental rights and principles recognised in Article 6 of the Treaty on European Union, the EU Charter of Fundamental Rights and the European Convention on Human Rights, in particular the principles of necessity, legality and proportionality, due process, protection of privacy and personal data, confidentiality of communications, the right to an effective remedy and to a fair trial, the presumption of innocence and procedural rights of defence, as well as the right not to be tried or punished twice in criminal proceedings for the same criminal offence.

In order to achieve these objectives and overcome the difficulties present in the existing mechanisms of cross-border cooperation, in April 2018 the EU Commission proposed an important legislative package referred to as “E-evidence”, aimed at facilitating access to e-evidence by European law enforcement agencies (LEAs). The framework contains two legislative measures: a Regulation, which provides two new mechanisms for LEAs’ cross-border access to e-evidence, the European Production Order and the European Preservation Order, which are to be addressed directly by LEAs of the issuing MS to a service provider; and a Directive, which requires every online service provider “established” in, or with a “substantial connection” to, at least one EU Member State to appoint a legal representative in the territory of an EU MS of its choice as an addressee for the execution of the above Orders.

On 7 December 2018 the Council adopted its own draft (known as the Council’s “general approach”) and, after two years of delays caused partly by the EU parliamentary elections and the Covid-19 pandemic, on 11 December 2020 the EU Parliament adopted its position. On 10 February 2021 the ‘trilogue’ procedure between the EU Parliament, the Council and the Commission started in order to agree a common text. In the study cited above, I have analysed in depth the key legal provisions contained in the Commission’s proposal, the Council’s draft and the report of the LIBE rapporteur Birgit Sippel, presented to the EU Parliament in 2020. Considering that the E-evidence framework is currently being negotiated, the study’s analysis and findings aim to contribute to achieving the best version of the forthcoming instruments.

The EU E-evidence framework is of particular importance in shaping the future of similar instruments and the terms of cooperation between countries all over the world. To a certain extent, it follows the US CLOUD Act 2018, which itself marks a major change in how cross-border access to e-evidence may develop in the rest of the world. The EU E-evidence framework will influence, and at the same time needs to conform to, a number of new agreements currently being negotiated. In 2019 the EU Commission received a negotiating mandate to achieve an agreement between the EU and the US, as well as to shape the second amending protocol of the Cybercrime Convention (CCC). Both these instruments need to be negotiated from the perspective of the forthcoming E-evidence framework; it is therefore important that the latter offers provisions that increase the efficiency of investigations and prosecutions by overcoming challenges in cross-border cooperation, while maintaining safeguards for the fundamental rights of individuals.

The E-Evidence legislative package lays down the rules under which, in a criminal proceeding, a competent judicial authority in the European Union may directly order a service provider offering services in the Union to produce or preserve electronic information that may serve as evidence, through a European Production or Preservation Order. This framework will be applicable in all cross-border cases where the service provider has its main establishment or is legally represented in another Member State. The framework aims to complement existing EU law and to clarify the rules of cooperation between law enforcement, judicial authorities and service providers in the field of electronic information. The new measures for cross-border access to e-evidence will not supersede European Investigation Orders under Directive 2014/41/EU or Mutual Legal Assistance procedures to obtain electronic information. Member States’ authorities are expected to choose the tool best adapted to their situation. However, authorities of the Member States will be allowed to issue domestic orders with extraterritorial effects for the production or preservation of electronic information that could be requested on the basis of the E-evidence framework.

Despite expected improvements in the efficiency of investigations and prosecutions by simplifying and speeding up the procedures, the necessity of having a new legal framework to organise cross-border access to electronic evidence has been questioned. The proposed e-evidence framework is perceived as adding another layer to the already complex tableau of existing, multiple channels for data access and transnational cooperation. While alternative approaches have been considered and could have been taken by the Commission, as I have argued in depth elsewhere, a specific framework dedicated to improving access to e-evidence is more suitable to achieving that goal than amendments to existing procedures and instruments that are general in scope and do not provide for the specific e-information-related challenges. Procedural improvements to existing cross-border cooperation instruments are necessary, but not by themselves sufficient to overcome the present difficulties and inefficiencies. It is not possible to adequately respond to novel challenges with old mechanisms embedded in lengthy procedures and bureaucratic complexities. The answer is to provide adequate safeguards that protect fundamental rights and the interests of all stakeholders, suited to the new type of instruments created by the e-evidence framework, albeit not identical to those found in existing mechanisms of transnational cooperation.

The E-evidence model builds upon the existing models of cooperation yet is fundamentally different. The extraterritorial dimension of the framework affects the traditional concept of territorial sovereignty and jurisdiction. It departs from the traditional rule of international cooperation that cross-border access to electronic information requires the consent of the state where the data is stored. Most importantly, jurisdiction is no longer linked to the location of data. According to the new approach, the jurisdiction of the EU and its MSs can be established over SPs offering their services in the Union; this requirement is met if the SP enables other persons in (at least) one MS to use its services and has a substantial connection to this MS. In this way the framework avoids the difficulties in establishing the place where the data is stored and the “loss of location” problem. The E-evidence framework is a clear example of the development of the concept of territorial jurisdiction in criminal law and the evolution of the connecting factors that establish it, in line with the requirements of legal certainty.

The extraterritorial reach of judicial and state authorities’ decisions in the E-evidence framework introduces a new dimension in mutual recognition, beyond traditional judicial cooperation in the EU in criminal matters, so far based on procedures involving two judicial authorities in the issuing and executing State respectively. This important aspect of the e-evidence framework entails a fundamentally different approach that demonstrates the (need for) development of traditional EU law concepts in order to respond to new challenges with adequate mechanisms. From the perspective of the proposed e-evidence framework, the scope of Article 82(1) TFEU requires further clarification from the CJEU or an amendment (albeit a difficult one). Although the framework relies on the principle of mutual trust, the debates surrounding it reveal that in today’s European reality this principle is still an objective to be achieved. For as long as disparities in the standards and protections provided by MSs still exist, the way forward should include innovative mechanisms that allow for the control, improvement and maintenance of those standards within each MS, as opposed to fostering lack of trust, prejudicial treatment and unjustifiable differentiation between MSs within the EU.

The e-evidence framework generally achieves what it sets out to do: to increase the effectiveness of cross-border access to e-evidence. The application of the same rules and procedures for access to all SPs will improve legal certainty and clarity for both SPs and LEAs, which is currently lacking under the existing mechanisms of cooperation. In several aspects the framework serves as a model to be followed in the international arena. However, further improvements can be recommended:

  • There should be only exceptional involvement of the enforcing MS, as proposed by the Council, so that the framework does not replicate the existing judicial cooperation models.
  • The wording of Article 7a in the Council draft could be amended to allow for the enforcing MS to raise objections on behalf of any affected state.
  • Service providers should maintain their powers to review production and preservation orders, given their unique position in understanding the data. A productive dialogue and close cooperation between SPs and the issuing authorities should be promoted from the earliest stages.
  • The framework should specify the definition of e-evidence and should provide for its inadmissibility in cases of breaches of the requirements specified therein.
  • The data categories need to be better defined and brought in line with other EU and international legal instruments, as well as the jurisprudence of the CJEU and the ECtHR. The draft presented by the EU Parliament is a positive step in that direction.
  • Judicial validation of orders issued by non-judicial authorities should be imperative for all types of data as a form of control and safeguard against abuse or overuse.
  • A classification of investigating authorities by means of a schedule in the proposed framework would help to better define the permitted activities within the scope of the Regulation.
  • A provision that clearly prohibits the production or use of e-evidence in cases contrary to the ne bis in idem principle should be included in the final draft.
  • The final instrument should adopt the approach proposed by the Commission regarding confidentiality and subject notification with an obligation for the issuing authority to inform the person whose content or transactional data are sought in all cases (even though delays should be permitted).
  • The right to exercise legal remedies should be extended to the enforcing MS and/or the MS of residence of the suspect.
  • There should be provisions that enable defendants or other parties in the criminal proceedings to access or request e-evidence. The accessibility of electronic data to the suspect’s or defendant’s lawyer should be ensured so that they can assert their rights effectively.

If implemented, these recommendations would improve the e-evidence framework by ensuring a balance between effective criminal investigations/prosecutions and respect for fundamental rights. A balanced and principled approach should be at the core of any existing or forthcoming instruments concerning cross-border access to electronic information.

ICO Targets Companies for Seeking to Illegally Make Profit from the Current Public Health Emergency

Photo by Adomas Aleno

Dr. Alexandros Antoniou, Lecturer in Media Law, University of Essex

On 24 September and 8 October 2020, the Information Commissioner’s Office (ICO), the United Kingdom’s independent body established to uphold information rights, imposed fines on two companies for sending thousands of nuisance marketing texts and unlawful marketing emails at the height of the current pandemic.

In September 2020, Digital Growth Experts Limited (DGEL) was issued with a monetary penalty of GBP 60,000 in relation to a serious contravention of Regulations 22 and 23 of the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR). The PECR provide for specific privacy rights in relation to electronic communications. They include rules on marketing calls, emails, texts and faxes; cookies (and similar technologies); keeping communications services secure; as well as on customer privacy in relation to traffic and location data, itemised billing, line identification, and directory listings. Under the 2003 Regulations, the ICO has the power to impose a monetary penalty of up to GBP 500,000 on a data controller.

The Commissioner found that between 29 February and 30 April 2020, DGEL had transmitted 16,190 direct marketing texts promoting a hand sanitising product, which was claimed to be “effective against coronavirus”. The company came to the attention of the Commissioner after several complaints were received via the GSMA’s spam reporting tool (the GSMA is an organisation that represents the interests of mobile operators worldwide).

In the course of the investigation, DGEL was unable to provide sufficient evidence of valid consent (as required by PECR) for any of the messages delivered to subscribers over the relevant period. The company’s explanations for its practices and the means by which it had obtained the data used for its direct marketing were found to be “unclear and inconsistent”.

DGEL had also used data obtained via social media ads which purported to offer free samples of the product to individuals, to automatically opt them into receiving direct marketing without advising them that their data would be used for this purpose, and without giving them (at the point the data was collected) a simple way of refusing the use of their contact details for direct marketing.

In October 2020, ICO again took action against a London-based software design consultancy, Studios MG Limited (SMGL), which had sent spam emails selling face masks during the pandemic. The company was fined GBP 40,000 for having transmitted unsolicited communications by means of electronic mail for the purposes of direct marketing, contrary to Regulation 22 of PECR.

More specifically, on 30 April – in the midst of the pandemic – SMGL sent up to 9,000 unlawful marketing emails to people without their permission. SMGL did not hold any evidence of consent for the individuals it had engaged in its one-day direct marketing campaign. ICO held that SMGL’s campaign had been made possible by using “data which had been scraped from various vaguely defined sources”.

ICO’s examination also found that SMGL’s director had decided to buy face masks to sell on at a profit, despite the fact that the company bore no apparent relation to the supplying of personal protective equipment (PPE). Moreover, it was impossible in SMGL’s case to determine the total number of individuals whose privacy had been affected, as the company had deleted a database with key data evidencing the full extent of the volume of emails delivered.

During the pandemic, ICO has been investigating several companies as part of its efforts to protect people from exploitation by unlawful marketing-related data processing activities. The ICO Head of Investigations said in a statement that DGEL “played upon people’s concerns at a time of great public uncertainty, acting with a blatant disregard for the law, and all in order to feather its own pockets.” A hard line was also taken in relation to SMGL. The Head of Investigations stated that “nuisance emails are never welcome at any time, but especially when people may be feeling vulnerable or worried and their concerns heightened.”

This article first appeared on the IRIS Merlin database of the European Audiovisual Observatory and is reproduced here with permission and thanks. Read the original article here.

Human Rights Expert Receives Major Funding to Investigate Impact of Algorithms on Democracy

Photo by Ari He

An Essex human rights expert has been awarded major funding to look at the impact of Artificial Intelligence-assisted decision-making on individual development and the functioning of democracy.

Dr Daragh Murray, from the School of Law and Human Rights Centre, is among the latest wave of individuals to receive funding as part of UK Research and Innovation’s Future Leaders Fellowships scheme. Dr Murray has been awarded over £1 million for an initial period of four years, to examine the impact of Artificial Intelligence (AI) assisted decision-making in a range of areas.

Dr Daragh Murray said: “Governments around the world are already using AI to help make important decisions that affect us all. This data-driven approach can offer key benefits, but it also relies on the ever-increasing collection of data on all aspects of our personal and public lives, representing both a step change in the information the state holds on us all, and a transformation in how that information is used.

“I want to look at the unintended consequences of this level of surveillance – the impact on how individuals develop their identity and how democratic society flourishes. Will a chilling effect emerge that changes individual behaviour? And what might the impact of this be? Will the knowledge that our activities are tracked and then translated into government decisions affect how we, for example, develop our sexual identity or our political opinions? Will we all be pushed towards the status quo in fear of the consequences of standing out?

“Ultimately what will the effect of this be on the well-being of our democracy?”

The Future Leaders Fellowships scheme is designed to establish the careers of world-class research and innovation leaders across the UK.

Dr Murray’s project will be interdisciplinary, working across human rights law, sociology and philosophy.

Dr Murray said: “We will be looking at lived experience in the context of wider discussions about how individuals and societies flourish. The intention is to re-imagine the human rights framework to address this very 21st century problem.”

Dr Murray is currently a member of the Human Rights Big Data & Technology Project, based at the University of Essex Human Rights Centre, and the Open Source for Rights Project, based at the University of Swansea. He was co-author with Professor Pete Fussey of the independent report into the Metropolitan Police Service’s trial of live facial recognition, published in July 2019.

He is a recognised expert in the field of Digital Verification, using open source investigation techniques to verify evidence of human rights abuses. He founded Essex Digital Verification Unit (DVU) in 2016 and co-edited Digital Witness, the first textbook in the field, with Sam Dubberley and Alexa Koenig. In 2019, Essex DVU was recognised with a Times Higher Education Award for International Collaboration of the Year, for its role in Amnesty International’s Digital Verification Corps.

The Fellows appoint mentors. In addition to Essex mentors Professor Lorna McGregor and Professor Pete Fussey, Dr Murray will benefit from the involvement of a stellar group of global experts: Professor Yuval Shany, from the Hebrew University of Jerusalem, is Vice-Chair of the United Nations Human Rights Committee, and Deputy President of the Israel Democracy Institute; Professor Ashley Deeks is a Research Professor of Law at University of Virginia Law School, Director of the School’s National Security Law Center and a member of the State Department’s Advisory Committee on International Law; Professor Alexa Koenig is Executive Director of University of California Berkeley’s Human Rights Center and sits on a number of national and international bodies looking at the impact of technology, as well as the board of advisors for ARCHER, a UC Berkeley-established non-profit that “leverages technology to make data-driven investigations accessible, smarter and more scalable.”

Launching the latest round of Future Leaders Fellowships, UK Research and Innovation Chief Executive, Professor Dame Ottoline Leyser, said: “Future Leaders Fellowships provide researchers and innovators with freedom and support to drive forward transformative new ideas and the opportunity to learn from peers right across the country.

“The fellows announced today illustrate how the UK continues to support and attract talented researchers and innovators across every discipline to our universities and businesses, with the potential to deliver change that can be felt across society and the economy.”

This story originally appeared on the University of Essex news webpage and is reproduced here with permission and thanks.

ICO’s Age Appropriate Design Code of Practice Comes Into Effect

Photo by Igor Starkov

Dr. Alexandros Antoniou, Lecturer in Media Law, University of Essex

On 2 September 2020, the Information Commissioner’s Office (ICO), the United Kingdom’s independent body established to uphold information rights, formally issued its Age Appropriate Design Code of Practice which should be followed by online services to protect children’s privacy.

The Age Appropriate Design Code of Practice, the first of its kind, is a statutory code required under Section 123 of the Data Protection Act 2018 and aims to address the increasing “datafication” of children. The Code was first published on 12 August 2020 and, following completion of its parliamentary stages, it came into force on 2 September 2020. The Information Commissioner, Elizabeth Denham CBE, stated: “For all the benefits the digital economy can offer children, we are not currently creating a safe space for them to learn, explore and play. This statutory Code of Practice looks to change that, not by seeking to protect children from the digital world, but by protecting them within it.”

The Code’s primary focus is to set a benchmark for the appropriate protection of children’s personal data and provide default settings which ensure that children have the best possible access to online services whilst minimising data collection and use, by default. It sets out 15 standards on data collection and protection, and reflects a risk-based approach. Section 123(7) of the DPA 2018 defines “standards of age-appropriate design” as “such standards of age-appropriate design of such services as appear to the Commissioner to be desirable having regard to the best interests of children.” The 15 standards of the Age Appropriate Design Code include a duty to conduct data protection impact assessments; transparency; policy and community standards; data sharing and minimisation; geolocation; parental controls; nudge techniques; and online tools, among others. For a brief overview of the standards laid out in the Code, see here. Because different services will need to implement different technical solutions, the ICO acknowledges that these are not intended as technical standards, but as a bundle of technology-neutral design principles and practical privacy features.

These principles apply to any online products or services (including, for instance, educational websites, social media platforms, apps, online games, and connected toys with or without a screen) that process personal data and are likely to be used by children under 18 in the UK; therefore, they are not limited to services specifically aimed at children. The Code covers entities based in the UK as well as entities based outside of the UK if their services are provided to (or monitor) users based in the UK. Services provided on an indirect charging basis (for example, funded by advertising) also fall within its remit.

The ICO and the courts will take the Code into account in determining whether the GDPR and PECR requirements have been met for the purposes of enforcement action. Although the Code is now in effect, the industry has been given a 12-month implementation period to get up to speed and introduce suitable changes. After a year in force, the ICO will undertake a review of the Code and its effectiveness.

This article was first published in the 9th issue of IRIS Legal Observations of the European Audiovisual Observatory and is reproduced here with permission and thanks.

The Oxford Statement on International Law Protections Against Foreign Electoral Interference through Digital Means

Photo by Joshua Sortino

Dr. Antonio Coco, Lecturer in Law at the University of Essex, has co-drafted The Oxford Statement on International Law Protections Against Foreign Electoral Interference through Digital Means, which has been signed by 139 international lawyers so far.

The Statement is the third in a series — informally known as the “Oxford Process” — aiming to clarify the rules of international law applicable to cyber operations which threaten areas of pressing global concern.

The first Statement (May 2020) concerned the protection of the healthcare sector. The second Statement (July 2020) focused on the protection of vaccine research. The third and most recent one (October 2020) tackles foreign electoral interference, and can be read at EJIL:Talk!, Opinio Juris and Just Security.

Reforming Cybercrime Legislations to Support Vulnerability Research: the UK Experience and Beyond

CODE BLUE (29-30 October 2020) is an international conference where the world’s top information security specialists gather to give cutting-edge talks, and a place for all participants to exchange information and interact beyond borders and languages. As technology and society move forward and the IoT (Internet of Things) becomes a reality, security is increasingly an urgent issue. The Internet world also needs to gather researchers to collaborate and think together about ways to respond to emergency situations, and to come up with possible solutions. CODE BLUE aims to be a place where international connections and communities form and grow, and will contribute to a better Internet world by connecting people through CODE (technology), beyond and across the BLUE (oceans).

This year, Dr Audrey Guinchard (Senior Lecturer in Law, University of Essex) gave a keynote on ‘Reforming cybercrime legislations to support vulnerability research: the UK experience and beyond’.

Cybercrime legislation (or ‘hacking laws’) tends to be notoriously broad, resting on a set of assumptions about what ‘unauthorised access’ means, assumptions which hardly match those of the technical or ethical fields. The result is that the offences of unauthorised access and misuse of tools have the potential to criminalise most aspects of legitimate vulnerability research (discovery, proof of concept, disclosure). Independent security researchers are notably at risk of criminal prosecution as they work, by definition, without vendors’ prior authorisation.

The UK is a particular case in point, having drafted its original Computer Misuse Act 1990 in such a way that even switching a computer on can constitute unauthorised access. Further reforms in 2006 and 2015 have expanded the scope of the legislation even more, by modifying or adding other offences as broad in scope as the original ones. While the UK is in that respect an outlier, the EU Directive 2013/40/EU on attacks against information systems, as well as the Convention on Cybercrime No. 185 (which is de facto the international treaty on the matter), are not without their own weaknesses, despite serious and effective efforts to restrict the scope of criminal law and protect security researchers.

Prosecution guidelines or a memorandum of understanding between the security industry and prosecutorial authorities are a welcome step towards avoiding outlandish prosecutions of security researchers, but, Dr Guinchard argues, they are not sufficient to protect researchers once a prosecution starts. Their motive (and the methods used) to improve security will not constitute a legal argument unless a public interest defence exists.

Hence, Audrey’s proposal to reform the cybercrime legislations (UK, EU and the Convention) by incorporating a public interest defence to cybercrime offences, in particular to the ‘hacking’ offence (unauthorised access). Momentum is certainly gathering in the UK. The Criminal Law Reform Now Network (CLRNN) has now released a comprehensive study of the UK Computer Misuse Act with a series of recommendations. It is time to make cybercrime legislations fit for the 21st Century, to borrow the slogan of a significant part of the UK security industry endorsing the report and the reform.

To read some of Dr Guinchard’s research papers which formed the background of this research, please see here and here.

Internet Safety Expert Recognised with OBE

Photo by Rami Al-zayat

An Essex legal expert has been recognised in the Queen’s Birthday Honours for her work on internet safety.

Professor Lorna Woods, from our School of Law, has been working since 2017 with William Perrin of the Carnegie UK Trust to develop a workable solution to ‘online harms’, a term that covers a range of internet safety issues. Professor Woods and Mr Perrin are both to receive OBEs.

Professor Woods said: “I am delighted, if a little surprised, by this honour. I’d like to thank Will, of course, but also Maeve Welsh and everyone at the Carnegie UK Trust – without their support, we would not have been able to develop our approach further or undertake the vital, ongoing engagement with those working in this area.

“Recent events have raised new concerns about the role of social media. The need for a statutory duty of care, overseen by an independent regulator, is not going away. In fact, it is more urgent than ever. We look forward to publication of the promised Online Harms Bill, and its consideration in this parliament.”

In October 2017, Professor Woods and Mr Perrin sat down to review the just-published Green Paper on Internet Safety Strategy.

Near-daily stories of bullying, self-harm and extremism had created a febrile debate. The challenge? To reset the online world and reduce the risk of harm.

The pair agreed the government response was inadequate. Drawing on their experience of the sector, they consulted with a range of actors, researched models already in use and started to write.

Across seven co-authored blogs, completed between February and May 2018 (and subsequently collected into a report, with funding from the Carnegie UK Trust), they sought to shift the debate from “publishing” and the removal of specific content to harm prevention, developing a detailed plan involving a statutory duty of care, overseen by an independent regulator.

The duty of care approach re-casts social media as a series of “public or quasi-public spaces”. In creating these spaces, the providers’ goal must be not maximising profit or engagement, but user safety. The more vulnerable an audience, the greater the responsibility.

At a time of significant public concern, their research has been a game-changer, offering a workable solution, inspiring a national newspaper campaign, rallying civil society groups and influencing lawmakers, at home and abroad.

In December 2019, they published their own draft Online Harm Reduction Bill, to maintain momentum. The draft bill was endorsed by organisations including the NSPCC, 5Rights Foundation, The Institute for Strategic Dialogue and the Royal Society of Public Health.

In January 2020, the authors and the Carnegie UK Trust also supported Lord McNally in the preparation of a short paving Bill to require Ofcom to prepare for the introduction of an Online Harms Reduction Regulator. The paving Bill was introduced into the Lords on 14 January 2020 and is currently awaiting a second reading.

Four Essex graduates have also been recognised in this year’s Queen’s Birthday Honours:

  • Dr Philip Orumwense (MA Political Behaviour, 1991) will receive a CBE for public service. Philip was Commercial Director of IT at Highways England and is recognised for his work across the public sector.
  • Sir David Attenborough (Honorary Graduate), has received a GCMG for his services to broadcasting and conservation.
  • Miss Carrie Anne Philbin (BA History, 2002) has received an MBE for services to education, championing diversity and inclusion in computing.
  • Ms Clare Woodman (BA Government & Sociology, 1989) has received a CBE for services to finance in her role as Head of EMEA and CEO of Morgan Stanley & Co. International PLC.

This story originally appeared on the University of Essex news webpage and is reproduced here with permission and thanks.

When is Mass Surveillance Justified? The CJEU Clarifies the Law in Privacy International and Other Cases

Photo by Matthew Henry

Lorna Woods, Professor of Internet Law, University of Essex

Background

This case concerns the collection of bulk communications data (BCD) from network operators by the security and intelligence agencies (SIAs).  It formed part of an action brought by Privacy International challenging the SIAs’ acquisition, use, retention, disclosure, storage and deletion of bulk personal datasets (BPDs) and BCD which started in 2015 before the Investigatory Powers Tribunal (IPT). Privacy International’s claim is based on its understanding of the safeguards required by the Court of Justice in Tele2/Watson – a 2016 CJEU judgment on UK data retention law, discussed here.

In Tele2/Watson the Court of Justice held that any data retention obligation must be targeted and limited to what is strictly necessary in terms of the persons affected, the sorts of data retained and the length of retention. It also suggested that access to retained data should be subject to prior review by an independent body and that parties affected should be informed of the processing (unless this would compromise the investigations); and that the data should be retained within the EU. The authorities must take steps to protect against misuse of data and any unlawful access to them. Privacy International argued that the safeguards provided by British law are insufficient. The British government claimed that the SIAs’ activities fell outside the scope of EU law and that the rules were compliant with Article 8 ECHR. It argued that providing the safeguards required by Tele2/Watson would undermine the SIAs’ ability to carry out their activities. The IPT referred two questions – but only in relation to BCD, not BPD – to the Court of Justice. This was the basis for the Court’s judgment handed down yesterday.

Questions in Issue

The two questions referred were:

  • whether the activities of the SIAs fall within the scope of EU law bearing in mind Art 4 TEU and Art 1(3) of Directive 2002/58 (ePrivacy Directive);
  • if the answer is that the situation falls within EU law, do any of the “Watson Requirements” (as above) (or any other requirements) apply?

The Court of Justice decided to deal with this case together with two other cases that had been referred to it: Joined Cases C-511/18 and C-512/18 La Quadrature du Net & Ors and Case C-520/18 Ordre des barreaux francophones et germanophone & Ors, which were also the subject of a separate judgment yesterday. Those cases likewise dealt with the bulk collection of communications data, but the referring court in La Quadrature du Net also asked whether real-time measures for the collection of the traffic and location data of specified individuals are permissible where, although they affect the rights and obligations of the providers of an electronic communications service, they do not require the providers to comply with a specific obligation to retain their data. It further asked whether the Charter requires persons subject to surveillance to be informed once such information is no longer liable to jeopardise the investigations being undertaken by the competent authorities, or whether other existing procedural guarantees ensuring a right to a remedy suffice. Ordre des barreaux francophones et germanophone & Ors raised the questions of whether a general retention obligation might be justified in order to identify the perpetrators of sexual abuse of minors and whether, if national law has not sufficiently guaranteed human rights, the effects of that law may be temporarily maintained in the interests of legal certainty and to achieve the objectives set down in the law.

The Advocate General handed down separate opinions on each of the cases (see here, here and here) but all on the same day (15 January 2020) and to similar effect, that:

  • the e-privacy directive (and EU law in general) applies in this situation because of the required co-operation of private parties;
  • limitations on the obligation to guarantee the confidentiality of communications must be interpreted narrowly and with regard to the rights in the EU Charter on Fundamental Rights;
  • the case law in Tele2/Watson (summarised above) should be upheld: general and indiscriminate retention of traffic and location data of all subscribers is an interference with the fundamental rights enshrined in the Charter, but real-time collection of traffic and location data of individuals suspected of being connected to a specific terrorist threat could be permissible provided it does not impose a requirement on communications service providers to retain additional data beyond that which is required for billing/marketing purposes; and that the use of such data for purposes less serious than the fight against terrorism and serious crime was incompatible with EU law.

Note that two more cases are pending: Case C-746/18 H.K. v Prokuratuur (Opinion handed down by AG Pitruzzella on 21 January 2020), as well as references from Germany from 2019 and Ireland from 2020.

Summary of Judgment

Privacy International

In its Grand Chamber judgment, the Court confirmed that requirements on communications service providers to retain data fell within the scope of EU law and specifically the e-Privacy Directive. The Court argued that the exclusion in Article 1(3) e-Privacy Directive related to “activities of the State or of State authorities and are unrelated to fields in which individuals are active” (para 35, citing Case C-207/16 Ministerio Fiscal, discussed here, para 32), whereas Art 3 makes clear that it regulates the activities of communications service providers. As held in Ministerio Fiscal, the scope of that directive extends not only to a legislative measure that requires providers of electronic communications services to retain traffic data and location data, but also to a legislative measure requiring them to grant the competent national authorities access to that data.

The legislative measures, permissible as a derogation under Article 15, “necessarily involve the processing, by those providers, of the data and cannot, to the extent that they regulate the activities of those providers, be regarded as activities characteristic of States” (para 39). Given the breadth of the meaning of ‘processing’ under the GDPR, the directions made under s 94 Telecommunications Act fall within the scope of the ePrivacy Directive. The Court re-affirmed (para 43) the approach of its Advocate General in this case (and in La Quadrature du Net) that ‘activities’ in the sense of Art 1(3) cannot be interpreted as covering legislative measures under the derogation provision; to hold otherwise would deprive Article 15 of any effect (following the reasoning in Tele2/Watson), and Article 4(2) TEU does not disturb that conclusion (despite the Court’s reasoning in the first PNR case (Cases C-317/04 and C-318/04, paras 56 to 59)). For the e-Privacy Directive (by contrast to the former Data Protection Directive in issue in the PNR case), what matters is who does the processing: here, it is the communications providers. The Court took the opportunity to confirm that the GDPR should not be interpreted in the same way as the Data Protection Directive but in parallel with the e-Privacy Directive.

As regards the second question, the Court re-stated the scope of s. 94 orders thus:

That data includes traffic data and location data, as well as information relating to the services used, pursuant to section 21(4) and (6) of the RIPA. That provision covers, inter alia, the data necessary to (i) identify the source and destination of a communication, (ii) determine the date, time, length and type of communication, (iii) identify the hardware used, and (iv) locate the terminal equipment and the communications. That data includes, inter alia, the name and address of the user, the telephone number of the person making the call and the number called by that person, the IP addresses of the source and addressee of the communication and the addresses of the websites visited.

Such a disclosure of data by transmission concerns all users of means of electronic communication, without its being specified whether that transmission must take place in real-time or subsequently. Once transmitted, that data is, according to the information set out in the request for a preliminary ruling, retained by the security and intelligence agencies and remains available to those agencies for the purposes of their activities, as with the other databases maintained by those agencies. In particular, the data thus acquired, which is subject to bulk automated processing and analysis, may be cross-checked with other databases containing different categories of bulk personal data or be disclosed outside those agencies and to third countries. Lastly, those operations do not require prior authorisation from a court or independent administrative authority and do not involve notifying the persons concerned in any way.

Paras 51-52

The Court stated that the purpose of the e-Privacy Directive was to protect users from threats to their privacy arising from new technologies. It ‘gave concrete expression to the rights enshrined in Articles 7 and 8 of the Charter’ (para 57), subject to the exceptions under Article 15(1), i.e. measures that are necessary, appropriate and proportionate in the interests of the purposes listed in Art 15(1): national security, defence and public security, and the prevention, investigation, detection and prosecution of criminal offences or of unauthorised use of the electronic communication system. The exceptions cannot be permitted to become the rule (citing Tele2/Watson, but also the ruling in La Quadrature du Net). Restrictions must also comply with the Charter. This is the same whether the legislation requires the retention of data or its transmission to third parties (citing the EU-Canada PNR Agreement opinion, discussed here, paras 122-123). Drawing on Schrems II, discussed here, the Court held:

any limitation on the exercise of fundamental rights must be provided for by law implies that the legal basis which permits the interference with those rights must itself define the scope of the limitation on the exercise of the right concerned.

Para 65

It also re-iterated that derogations from the protection of personal data and any restriction on the confidentiality of communications and traffic data may apply only in so far as is strictly necessary and ‘by properly balancing the objective of general interest against the rights at issue’ (para 67). Proportionality also requires the legislation to lay down clear and precise rules governing the scope and application of the measure in question and imposing minimum safeguards, to protect effectively against the risk of abuse. The legislation must set down conditions for the application of the measures so as to restrict them to those ‘strictly necessary’; the legislation must be binding. Automated processing gives rise to greater risks. These considerations are all the more pressing in the context of sensitive data.

The Court noted that the transmission of data to SIAs constituted a breach of confidentiality in a general and indiscriminate way and thus:

has the effect of making the exception to the obligation of principle to ensure the confidentiality of data the rule, whereas the system established by Directive 2002/58 requires that that exception remain an exception.

Para 69

It also constitutes an interference with Articles 7 and 8 of the Charter, no matter how the data are subsequently used. Re-iterating its approach in the EU-Canada PNR Opinion, the Court stated that:

it does not matter whether the information in question relating to persons’ private lives is sensitive or whether the persons concerned have been inconvenienced in any way on account of that interference.

Para 70

Here, given the potential to create a personal profile of individuals, the intrusion was particularly serious and “no less sensitive than the actual content of communications” (para 71). The Court also emphasised the impact of the feeling of being under constant surveillance, following its reasoning in Digital Rights Ireland (discussed here) and Tele2/Watson. Such surveillance may have an impact on freedom of expression, especially where users are subject to professional secrecy rules or are whistleblowers. The Court also noted that, given the quantity of data in issue, their “mere retention” entails a risk of abuse and unlawful access (para 73).

The Court distinguished between ‘national security’, understood in the light of Article 4(2) TEU, and ‘public security’ and the other matters within Article 15 ePrivacy Directive. While measures safeguarding national security must still comply with Art 52(1) of the Charter, given the seriousness of the threats comprised in ‘national security’, in principle the objective of safeguarding national security is capable of justifying more intrusive measures than those which could be justified by other objectives (cross-referring to its reasoning in La Quadrature du Net).

Even in relation to national security, the underlying national legislation must also lay down the substantive and procedural conditions governing use of the data and not just provide for access. National legislation must rely on objective criteria in order to define the circumstances and conditions under which the competent national authorities are to be granted access to the data at issue. Here, the national legislation requiring providers of electronic communications services to disclose traffic data and location data to the security and intelligence agencies by means of general and indiscriminate transmission exceeds the limits of what is strictly necessary and cannot be considered to be justified, within a democratic society even in the interests of protecting national security.

La Quadrature du Net/Ordre des barreaux francophones et germanophone

The Court’s approach to Article 15 and the sorts of activities in the service of which surveillance may be undertaken by contrast with Article 3(1) was, unsurprisingly, the same as can be seen in Privacy International, as was its approach to interpreting the directive – emphasising the confidentiality of communications as well as Articles 7 and 8 EU Charter. Again, the Court took the approach that the exception to communications confidentiality should not become the rule and that exceptions must be strictly necessary and proportionate to their objectives. Retention of communications data is a serious interference with fundamental rights – including freedom of expression. The retention of the data constitutes such an interference whether or not the data are sensitive or whether the user was inconvenienced.

In similar terms to Privacy International, the Court again came to the conclusion that the general and indiscriminate retention of data was impermissible under the Charter and Article 15. The Court also re-stated the limitations on derogating measures made under Art 15. The point of difference in this analysis is that the Court recognised the conflicting rights that might need to be reconciled – particularly with regard to crimes against minors and the State’s positive obligation to protect them. This does not mean that the limits as regards necessity and proportionality may be overlooked.

The Court then considered the meaning of national security – approaching the matter in the same terms as it did in Privacy International. This higher threshold meant that neither the directive nor the Charter precludes recourse to an order requiring providers of electronic communications services to retain, generally and indiscriminately, traffic data and location data. This however is only so when the Member State concerned is facing a sufficiently serious threat to national security (which includes matters more serious than those listed in Art 15), a threat that is genuine and actual or foreseeable. In such a case retention can only be for a period of time limited to that which is strictly necessary. If any such order is to be renewed it must be for a specified length of time. The retained data must be protected by strict safeguards against the risk of abuse. The decision must be subject to effective review by an independent body (court or administrative), whose decision is binding, in order to verify that such a situation exists and that the conditions and safeguards laid down are observed.

The Court observed that general and indiscriminate surveillance refers to that which covers virtually all the population. The Court recognised the duties of the State under positive obligations and the need to balance potentially conflicting rights. It then held that in situations such as those described at paras 135-6 of its judgment, that is those falling in Article 4(2) TEU, the e-Privacy Directive and the Charter do not preclude measures for targeted retention of traffic and location data. Such measures must be limited in time to what is strictly necessary, and focused on categories of persons identified on the basis of objective and non-discriminatory factors, or by using geographical criteria.  It then relied on similar reasoning in relation to the fight against crime and the protection of public safety.

Similarly, IP addresses may be retained in a general and indiscriminate manner, subject to a requirement of strict necessity. Further, the directive also does not preclude the retention of data beyond statutory data retention periods when strictly necessary to shed light on serious criminal offences or attacks on national security, when the offences or attacks have already been established or their existence may reasonably be suspected. Real-time data may also be used when limited to people in respect of whom there is a valid reason to suppose that they are involved in terrorist activities. Such use of data must be subject to prior review by an independent body to ensure that real-time collection is limited to what is strictly necessary. The Court notes that in urgent cases the review should take place promptly (presumably rather than after the event).

Finally, a national court may not apply a provision of national law empowering it to limit the temporal effects of a declaration of illegality which it is bound to make in respect of national legislation that is incompatible with the e-Privacy Directive, and evidence obtained illegally should not be relied on in court.

Comment

The common theme across the cases was the acceptability of the retention and analysis of communications data generally. The Court has re-iterated its general approach, unsurprisingly drawing a link – as the Advocate General also did – between the Privacy International ruling and that in La Quadrature du Net. In its approach, the Court relied generously on its previous rulings, which demonstrates that there is quite a thick rope of cases, all to broadly the same effect. While the Court based its ruling on the ePrivacy Directive (which is specific to communications and communications data), it also based its ruling more generally on Articles 7 and 8 of the Charter. It is noteworthy that the Court did not just refer to its case law on communications data but also to the Canada PNR opinion, underlining that there is a similar approach no matter the type of data in issue. The Court also relied on Schrems II, implicitly confirming aspects of its approach there and embedding that decision in its jurisprudence. The underlying concern in Schrems II was the same as here: that data collected by private actors are accessed by state actors. In sum, even in the interests of national security, general and indiscriminate surveillance does not satisfy the test of strict necessity and proportionality. While the Court’s general approach might be similar to what has gone before, there are still some points of interest and new ground covered.

The IPT seems to have been the only court amongst those making references that still had not accepted that the retention of data falls within the scope of the e-Privacy Directive, relying on the Court’s reasoning on the Data Protection Directive in relation to passenger name records in an early case. In addition to re-establishing the well-trodden principles that requiring electronic service providers to retain data brings the entire scheme within the scope of the e-Privacy Directive, and the different functions of Article 1(3) (scope of the directive) and Art 15 (derogation from the directive), the Court took the opportunity to say something about the scope of the GDPR, the successor legislation to the Data Protection Directive. In effect, the Court has stopped the line of reasoning found in that early PNR judgment – it cannot be used to determine the scope of the GDPR, which should be understood in line with Art 1(3) of the e-Privacy Directive.

The Court has emphasised a couple of aspects of the legal regime surrounding surveillance that are worth a second look. Firstly, while the Court says nothing about the form of law on which surveillance may be based, in its analysis of Article 52(1) Charter it does say that the same law must contain the constraints. The principle then has wider application than just communications data. This raises questions about forms of surveillance rolled out by the police based on broad common law powers, or – as in the recent Bridges decision – on a mix of legislation, common law and code. These sorts of surveillance – although in public – may also give rise to a feeling of being subject to constant surveillance, though the Court’s jurisprudence on video-surveillance under the GDPR has not yet grappled with this issue. It may be, however, that the Court would take a different view on the extent to which ‘private life’ would be engaged in such circumstances. It is also worth noting that the views of the independent body must be binding on the SIAs; this reiterates the point that in principle approval must be sought in advance.

The Court also made clear that the rights in issue are not just privacy and data protection; it specifically referred here to freedom of expression and flagged the distinctive position of those under professional duties of confidentiality (doctors, lawyers) and whistleblowers. It did not, however, consider whether any infringement was justified in this context. The list of possible rights affected is not limited to freedom of expression: in Schrems II the Court highlighted the right to a remedy. It is not inconceivable that the right to association could also be affected. Presumably the same points of analysis apply – that general and indiscriminate monitoring cannot be justified even in the interests of national security. The Court also recognised, in La Quadrature du Net, the positive obligations on the State in relation to Articles 3 and 8 ECHR and the corresponding articles in the Charter – Articles 4 and 7. The balancing of these positive obligations provided the framework for the Court’s analysis of the types of surveillance that did not immediately fall foul of its prohibition of general and indiscriminate data retention. In this context, it might almost be said that the Court is reformulating public interest objectives (such as national security or the fight against sexual abuse of children) as positive obligations and thus bringing them into a rights-balancing framework.

The Court’s reasoning in both cases also gave us some insight into the meaning of national security. It is distinct from, and covers more serious issues than, the objectives listed in Art 15. While this in principle seems to allow more intrusive measures to be justified, it seems that the Court has limited the circumstances in which it can be relied upon. It seemingly does not overlap with the grounds in Article 15 e-Privacy Directive. So it might be argued, reading this part of the judgments, that serious crime cannot be blurred with national security. The devil will be in the detail here – a tricky one for any independent body to patrol – and in terms of permitted surveillance it is not clear what the consequences in practice would be.

The headline news, however, must be the ruling of the Court relating to measures that do not fall within the prohibition as general and indiscriminate measures. This on one level is not totally novel; it is implied, for example, in Tele2/Watson, para 106. The questions relate to what level of generality of surveillance would be permissible, and in relation to what sort of objective? Para 137 seems to limit targeted retention of communications data to matters of national security (including terrorism), but the Court then wheels out the same reasoning in relation to serious crime and public safety, and seems to envisage similar safeguards in both cases. This then means that the test of ‘strict necessity’ is doing a lot of work in distinguishing between the legitimate and illegitimate use of surveillance measures. The Court has historically not been particularly strong on what it requires of a necessity test – let alone one requiring strict necessity – in other cases involving the interference with Charter rights.

The final point relates to the procedural questions. The Court was clear that striking down incompatible law cannot have suspended effect. Yet that is precisely what the English court did in Watson when allowing the UK government several months to get its house in order. The Court of Justice also held here that illegally obtained evidence cannot be used in court, relying on the need to ensure that the rights granted by EU law are effective. While the status of EU law in the British courts may currently be uncertain, on the face of it this might mean that convictions based on data obtained between the handing down of Tele2/Watson (or at latest its application by the English courts) and the revision of the regime might be open to challenge, whatever the domestic rules on evidence might say. Of course, even setting aside the jurisprudential consequences of Brexit, the Court of Appeal in its approach to Tele2/Watson ignored the aspects of the judgment directed at the Tele2 referring court, despite the fact that that element of the judgment was an interpretation of EU law having general application; it is to be assumed that it would be still more likely to ignore a ruling in a different case altogether.

This post first appeared on the EU Law Analysis blog and is reproduced here with permission and thanks.

Essex Expertise Informs Facial Recognition Decision

The expertise and leading-edge research of three Essex academics has informed a landmark judgment on police use of facial recognition.

On Tuesday 11 August, the Court of Appeal delivered its judgment in a case brought by civil liberties campaigner Ed Bridges and the campaigning organisation Liberty, challenging a previous decision in favour of South Wales Police.

Mr Bridges, who lives in Cardiff, argued that it was possible South Wales Police had captured an image of his face on two occasions, as a result of facial recognition technology being deployed.

He brought a claim for judicial review, arguing that South Wales Police’s approach to deployment was incompatible with the right to respect for private life under Article 8 of the European Convention on Human Rights, data protection legislation, and the Public Sector Equality Duty under section 149 of the Equality Act 2010.

Professor Pete Fussey, from the Department of Sociology and Professor Lorna Woods and Dr Daragh Murray, both from the School of Law, contributed to a ‘Friends of the Court’ submission by the Surveillance Camera Commissioner to the Bridges appeal.

In addition, an annex, detailing Professor Fussey and Dr Murray’s findings in relation to the Metropolitan Police Service, was attached to the Surveillance Camera Commissioner’s submission.

The Court of Appeal upheld the Bridges appeal on four of its five grounds.

Commenting on the judgment, Professor Pete Fussey said: “The Court’s findings in relation to the use of live facial recognition technology by South Wales Police are consistent with our findings regarding the Metropolitan Police Service, in particular that such deployments are not ‘in accordance with the law’, and that too much discretion is given to police in determining who should be placed on a watchlist. The Court of Appeal was entirely correct in concluding that facial recognition cannot be considered as equivalent to the use of CCTV. The use of advanced surveillance technologies like live facial recognition demands proper consideration and full parliamentary scrutiny.”

Dr Daragh Murray said: “The use of advanced surveillance technologies, like live facial recognition, represent a step change in police capability, with potentially significant consequences for the functioning of our democracy, in terms of how individuals develop and interact and how challenges to, or protests against, government policy evolve. The Court of Appeal’s findings today regarding South Wales Police are consistent with many of our own conclusions regarding the Metropolitan Police Service. This is an important decision, particularly the conclusion that deployments were not ‘in accordance with the law’. However, many issues remain to be addressed, including the broader societal impact of facial recognition. What is abundantly clear is that all police forces should pay greater attention to human rights law considerations before deciding to deploy new surveillance technologies.”

Professor Lorna Woods said: “The judgment in ruling that the police use of Automated Facial Recognition as it stands is unlawful is welcome, but it also highlights the problems arising from a system where new surveillance technologies can be deployed based on very general common law powers without adequate safeguards. New legislation on this topic is required, to address not only the proposed use of facial recognition technology, but police use of Artificial Intelligence generally.”

Professor Lorna Woods is Professor of Internet Law. She has extensive experience in the field of media policy and communications regulation, including social media and the Internet, and developed, with Will Perrin, a social media duty of care which has had significant influence on the direction of the UK Online Harms debate. Professor Woods is an established member of a broader network of advisors who support the Surveillance Camera Commissioner in his role.

Professor Pete Fussey and Dr Daragh Murray are co-authors of the independent report into the London Metropolitan Police Service’s trial of live facial recognition technology, published by the ESRC Human Rights, Big Data and Technology Project in July 2019. It remains the only fully independently funded report into police use of live facial recognition technology in the UK.

South Wales Police said it would not be appealing the Court of Appeal judgment.

This story originally appeared on the University of Essex website and is reproduced on our blog with permission and thanks.

“You Were Only Supposed to Blow the Bloody Doors Off!”: Schrems II and External Transfers of Personal Data

Photo by Joshua Sortino

Prof. Lorna Woods, Professor of Internet Law, University of Essex

The Court of Justice today handed down its much-anticipated ruling on the legality of standard contractual clauses (SCCs) as a mechanism to transfer personal data outside the European Union. It forms part of Schrems’ campaign to challenge the ‘surveillance capitalism’ model on which many online businesses operate; other challenges to the behavioural advertising model are ongoing. While this case is clearly significant for SCCs and Facebook’s operations, there is a larger picture that involves the Court’s stance against mass (or undifferentiated) surveillance. This formed part of the background to Schrems I (Case C-362/14, discussed here), but has also been relevant in European jurisprudence on the retention of communications data. This then brings us to a third reason why this judgment may be significant. The UK, like the US, has a system for mass surveillance, and once we reach the end of the year data controllers in the EU will need to consider the mechanisms that will allow personal data to flow to the UK. The approach of the Court to mass surveillance in Schrems II is therefore an indicator of its approach to a similar question in relation to the UK in 2021.

Background

The General Data Protection Regulation provides that transfer of personal data may only take place on one of the bases set out in the GDPR. The destination state may, for example, have an ‘adequacy decision’, meaning that the state in question ensures an adequate (roughly equivalent) level of protection to that ensured by the GDPR (Article 45 GDPR). The original adequacy agreement in relation to the United States (safe harbour) was struck down in Schrems I because it failed to ensure that there was adequate protection on a number of grounds, some of which related to the safe harbour system itself, but some of which related to the law in the US, specifically that which allowed mass surveillance. While the safe harbour was replaced by the Privacy Shield under Decision 2016/1250 on the Privacy Shield (Privacy Shield Decision), which addressed some of the weaknesses as regards the operation of the mechanism itself, including the introduction of an ombudsman system, little if anything changed in relation to surveillance.

Another mechanism for transfer of personal data outside the EU is that of SCCs, which are private agreements between the transferor (data controller) and transferee. Article 46(1) GDPR states that where there is no adequacy decision “a controller or processor may transfer personal data to a third country or an international organisation only if the controller or processor has provided appropriate safeguards, and on condition that enforceable data subject rights and effective legal remedies for data subjects are available”. Article 46(2) GDPR lists possible mechanisms including standard data protection clauses. The Commission has produced a model form of these agreements in Commission Decision 2010/87 (SCC Decision). 

Following the outcome of Schrems I, Schrems reformulated his complaint to the Irish Data Protection Commissioner (DPC) about data transfers, arguing that the United States does not provide adequate protection as United States law requires Facebook Inc. to make the personal data transferred to it available to certain United States authorities, such as the National Security Agency (NSA) and the Federal Bureau of Investigation (FBI); that the data is used in a manner incompatible with the right to private life; and that therefore future transfers by Facebook should be suspended. These transfers are currently carried out on the basis of SCCs as approved by the SCC Decision. The DPC took the view that this complaint called into question the validity of that decision as well as the Privacy Shield Decision, which moved the issue back into the courts. The Irish High Court referred the question to the Court of Justice, and it is the outcome of that reference that we see in today’s ruling.

The Judgment

The Advocate General in his Opinion (discussed here) suggested to the Court that the SCC Decision was valid; the problem was the context in which it operated. He took the view that the Privacy Shield’s validity should be considered separately. Crucially, he held that data controllers need to determine the adequacy of protection in the destination state. This in practice is difficult; while a data controller might have some control over what the recipient does with the data (how it is processed, data security, etc.), it would have little control over the general legal environment. In any event, data controllers would be required to make specific country assessments on this, which could be challenged by dissatisfied data subjects. The Court took a slightly different approach. It agreed with its Advocate General that the SCC Decision was valid, but it struck down the Privacy Shield.

The Court made a number of findings. The first relates to the scope of inquiry and to competence. Given that national security lies outside the GDPR (and outside EU competence), the question arose whether the processing of data for purposes of public security, defence and State security falls outside the scope of the GDPR rules. Following its position in Schrems I, the Court (like its Advocate General) rejected this argument [paras 83, 86, 88]: the transfer of personal data by economic operators for commercial purposes, even if that personal data is then processed by the authorities of the destination state for national security reasons, remains within the GDPR framework. Exclusions from the regime should be interpreted narrowly (citing Jehovan todistajat (Case C-25/17), discussed here).

In determining the level of protection the GDPR requires, the Court reiterated its stance from Schrems I and, following the reasoning of its Advocate General in this case, held that we are looking for a level of protection “essentially equivalent” to that in the EU, bearing in mind that the GDPR is to be understood in the light of the EU Charter. So not only must the terms of the SCCs themselves be taken into account but also the general legal environment in the destination State. The Court summarised:

…. the assessment of the level of protection afforded in the context of such a transfer must, in particular, take into consideration both the contractual clauses agreed between the controller or processor established in the European Union and the recipient of the transfer established in the third country concerned and, as regards any access by the public authorities of that third country to the personal data transferred, the relevant aspects of the legal system of that third country, in particular those set out, in a non-exhaustive manner, in Article 45(2) of [the GDPR].

[Para 105]

The Court noted that the national supervisory authorities are responsible for monitoring compliance with EU rules, and may check compliance with the requirements of the GDPR (following on from the position under the DPD established in Schrems I), and the national regulatory authorities have significant investigative powers. Where the SCCs are not complied with – or cannot be complied with – the national regulatory authorities must suspend or prohibit transfers and the Commission’s competence to draft SCCs does not restrict the powers of national authorities to review compliance in any way.  In this the Court’s approach is broadly similar to that of the Advocate General.  As regards an adequacy decision, a valid adequacy decision is binding, until such time as it may be declared invalid; this does not stop individuals from being able to complain.

Applying the principles to the SCC Decision, the Court noted that the standards bind only the parties to the agreement. Consequently, although there are situations in which, depending on the law and practices in force in the third country concerned, the recipient of such a transfer is in a position to guarantee the necessary protection of the data solely on the basis of standard data protection clauses, there are others in which the content of those standard clauses might not constitute a sufficient means of ensuring, in practice, the effective protection of personal data transferred to the third country concerned [para 126].

Does this possibility mean that the SCC Decision is necessarily invalid? The Court held not. Unlike an adequacy agreement, which necessarily relates to a particular place, the SCC Decision does not. The SCCs therefore may require supplementing to deal with issues in individual cases. Moreover, the SCC Decision includes effective mechanisms that make it possible to ensure compliance with EU standards [para 137]. Specifically, the SCC Decision imposes an obligation on a data exporter and the recipient of the data to verify, prior to any transfer, whether that level of protection is respected in the third country concerned. The recipient of the data must inform the data controller of any inability to comply with the SCCs, at which point the data controller is obliged to suspend transfers and/or terminate the contract. The SCC Decision is therefore valid; the implications of this in practice for this case were not drawn out. The Court in the end held that:

… unless there is a valid European Commission adequacy decision, the competent supervisory authority is required to suspend or prohibit a transfer of data to a third country pursuant to standard data protection clauses adopted by the Commission, if, in the view of that supervisory authority and in the light of all the circumstances of that transfer, those clauses are not or cannot be complied with in that third country and the protection of the data transferred that is required by EU law, in particular by Articles 45 and 46 of that regulation and by the Charter of Fundamental Rights, cannot be ensured by other means, where the controller or a processor has not itself suspended or put an end to the transfer [operative ground 3].

The existence of an adequacy decision is then key. Turning to the Privacy Shield Decision, the Court set the same analytical framework, emphasising the GDPR is understood in the light of the Charter and the rights to private life, to data protection and to an effective remedy. In assessing the decision, the Court noted that it awards primacy to the requirements of US national security, public interest and law enforcement, which the Court interpreted as condoning interference with the fundamental rights of persons whose data are transferred. In the view of the Court, access and use of personal data by US authorities are not limited in a way that is essentially equivalent to EU law – the surveillance programmes are not limited to what is strictly necessary and are disproportionate. Further, data subjects are not granted rights to take action before the courts against US authorities. The Ombudsperson mechanism, introduced by the Privacy Shield Decision as an improvement on the position under safe harbour, is insufficient. The Court therefore declared the Privacy Shield invalid.

Comment

The most obvious question arising from this ruling is how data transfers to the US can continue. The Privacy Shield is no more, and its demise has consequences for the operation of SCCs in practice. Given the weaknesses in the general legal system from the perspective of the Court of Justice – weaknesses over which the data controller/exporter can have little control – how can the requirement to individually assess adequacy be satisfied? Are there, however, any other mechanisms on which data transfers could be carried out?

In this context, we should note how the Court has interpreted the provisions of Chapter V to create a common baseline for standards, despite differences in wording between Articles 45 and 46 GDPR. Article 45 deals with adequacy decisions and requires that there is “an adequate level of protection”; Article 45(2) then lists elements to be taken into account – notably respect for the rule of law and human rights and “relevant legislation, both general and sectoral, including concerning public security, defence, national security and criminal law and the access of public authorities to personal data”. It was this provision that was interpreted in Schrems I to require a level of protection that is ‘essentially equivalent’. Article 46(1) – which is relevant to the other mechanisms by which transfers may take place, including agreements between public authorities and binding corporate rules as well as SCCs – says something different. Article 46(1) requires “appropriate safeguards” and “enforceable data subject rights and effective legal remedies for data subjects”. This is then not necessarily the same – at least in terms of simple wording – as Article 45(1). The Court, however, has read Articles 46 and 45 together so as to ensure that, as required by Article 44, data subjects’ rights are not undermined. This brings the essential equivalence test across to Article 46 [see para 96], and not just to SCCs but to all the other mechanisms for data transfer listed in Article 46(2). More specifically, the factors to be taken into account when considering whether there are appropriate safeguards match the list set out in Article 45(2).

The Court also emphasised that the requirements of the GDPR must be understood in the light of the EU Charter as interpreted by the Court itself [para 100]. In this context, the backdrop of the Court’s approach to fundamental rights – specifically the right to private life in Article 7 EU Charter – is significant. The Court, in a number of cases involving the bulk retention of communications and location data by telecommunications operators so that those data could be accessed by law enforcement and intelligence agencies, found those requirements – because they applied in an undifferentiated manner, irrespective of suspicion, across the population – to be disproportionate (Digital Rights Ireland and Others, Cases C-293/12 and C-594/12; Tele2/Watson (Cases C-203/15 and C-698/15), discussed here and here). The Court has also criticised the use of passenger name records (PNR) data (Opinion 1/15 (EU-Canada PNR Agreement), discussed here) and in particular the use of automated processing. The Court in its review of the facts referred to a number of surveillance programmes and noted that the referring court had found that these were not ‘essentially equivalent’ to the standards guaranteed by Articles 7 and 8 EU Charter. This would seemingly cause a problem not just for the adequacy agreement, but for an operator seeking to rely on SCCs – or on any other mechanism listed in Article 46(2).

This brings to the forefront Article 49 GDPR, referred to by the Court as filling any ‘vacuum’ that results from its judgment, which allows derogations for external transfers in specific situations, notably where the data subject has consented or where the transfer is necessary for the performance of a contract. While these might at first glance give some comfort to data controllers, a couple of words of caution should be noted. First, these reflect the grounds for lawful processing and should be interpreted accordingly. Notably, ‘explicit consent’ is a high bar – all consent must be freely given, specific, informed and unambiguous – and it should be linked to a specific processing purpose (on consent generally, see EDPB Guidelines). The ground that something is necessary for a contract does not cover all actions related to that contract – in general a rather narrow approach might be anticipated (see EDPB Guidance).

The final point relates to the UK. The UK – perhaps infamously – also has an extensive surveillance regime, which has been the subject of references to the Court of Justice (as well as a number of cases before the European Court of Human Rights). Crucially, the regime does have some oversight, and there is an independent tribunal which has a relaxed approach to standing. Nonetheless, bulk collection of data is permissible under the Investigatory Powers Act, and it is an open question whether the Court of Justice would accept that this is necessary or proportionate, despite the changes brought in since the Tele2/Watson ruling on the communications data rules. Further, the UK has entered into some data sharing agreements with the US which have given rise to disquiet in some parts of the EU institutions. Whilst a member of the EU, it benefitted in terms of data flows from not having to prove the adequacy of its safeguards. From 2021 that will change. In the light of the approach of the Court of Justice, which can be seen as re-emphasising and embedding its stance on surveillance, obtaining an adequacy agreement may not be so easy for the UK, and given the similarity in approach underpinning Articles 45 and 46 GDPR, other mechanisms for data flow may also run into problems if this is the case. For now, the jury is out.

This post originally appeared on the EU Law Analysis Blog and is reproduced here with permission and thanks.