Sexual Misconduct Claims against Conservative MP: What Stops the Media from Naming Rape Suspects?

Photo by Joe

Dr. Alexandros Antoniou, Lecturer in Media Law, University of Essex

The allegations of sexual misconduct against an unnamed Conservative MP have received significant media coverage lately. The Sunday Times reported that the ex-minister was taken into custody on Saturday 1 August 2020 after a former parliamentary employee accused them of rape, sexual assault and coercive control. The MP has not been named publicly so far. But what stops the media from naming rape suspects? There are several aspects of media law which are relevant to this case.

To start with, the Tory MP remains anonymous partly because of recent developments in the law of privacy. Cliff Richard’s legal action against the BBC in 2018 established that suspects of law enforcement investigations enjoy ‘a reasonable expectation of privacy’ up to the point of charge. This general principle was endorsed by the Court of Appeal in the subsequent case of ZXC v Bloomberg LP in May 2020. Giving lead judgment in this case, Lord Justice Simon stated:

[…] those who have simply come under suspicion by an organ of the state have, in general, a reasonable and objectively founded expectation of privacy in relation to that fact and an expressed basis for that suspicion. The suspicion may ultimately be shown to be well-founded or ill-founded, but until that point the law should recognise the human characteristic to assume the worst (that there is no smoke without fire); and to overlook the fundamental legal principle that those who are accused of an offence are deemed to be innocent until they are proven guilty.

[para. 82]

This does not necessarily mean that the media cannot report on criminal investigations. A suspect can, however, only lawfully be named where there are countervailing public interest grounds which outweigh their privacy interests and justify disclosure of their identity (e.g. where the individual under investigation is a political figure). Different media organisations may approach this balancing exercise differently; hence, some outlets may decide to name a suspect more quickly than others.

Furthermore, an alleged victim of a sexual offence enjoys an automatic right to lifelong anonymity under section 1 of the Sexual Offences (Amendment) Act 1992 and must not be identified in a written publication available to the public or in a relevant programme for reception in England and Wales. The anonymity applies from the time an allegation is made, whether by the alleged victim or by anyone else. Section 5 of the 1992 Act makes it an offence to breach these provisions. The individual concerned may waive their right to anonymity if specific requirements are fulfilled, and a court can lift the anonymity in certain circumstances, but this happens only rarely. One practical implication of these statutory provisions is that the media must be mindful of the potential for ‘jigsaw’ identification, i.e. the piecing together of different bits of information that create a more complete picture of an individual whose identity should be concealed. The media must therefore avoid publishing any matter ‘likely to lead’ to the complainant’s identification and, as a result, care is needed with detail.

There could also be libel risks if, prior to any charge, a suggestion is published that an identified suspect may be guilty of a crime. A media report which includes the suspect’s name may allow that individual to sue the publisher successfully for defamation if the investigation does not lead to a prosecution. The media can safely publish the name of a person under investigation if the name is officially supplied by a spokesperson for a governmental agency, e.g. the police, because the report will then be protected by the defence of qualified privilege in defamation law. It is anticipated that most media outlets will wait until the individual concerned has been named by the police. Finally, the publication of prejudicial details could result in a conviction for contempt of court if a judge considers that the material published created ‘a substantial risk of serious prejudice or impediment’ to the legal proceedings.

New Socio-Legal Research on Harmful Gender Stereotypes in Advertising

Photo by Joshua Earle

Dr. Alexandros Antoniou and Dr. Dimitris Akrivos, Lecturers in Media Law, University of Essex

A year after the introduction of the UK Advertising Standards Authority’s (ASA) new rule on gender stereotyping, a new study evaluates the regulator’s approach to depictions of harmful gender stereotypes in advertisements.

Dr Alexandros Antoniou and Dr Dimitris Akrivos from the School of Law are the authors of ‘Gender portrayals in advertising: stereotypes, inclusive marketing and regulation’. Their study, which was recently published in the Journal of Media Law, a leading journal in the field, offers an in-depth socio-legal analysis of the ASA’s modern practice which systematises for the first time the regulator’s rulings in the field of gender stereotyping.

For a long time, academic research has highlighted the impact that gender-stereotypical advertising images can have on people’s aspirations, professional performance and mental well-being. In response to long-standing concerns around the matter, the ASA introduced a new advertising rule and accompanying guidance into its harm and offensiveness framework. The rule, which came into effect on 14 June 2019, states: ‘Advertisements must not include gender stereotypes that are likely to cause harm, or serious or widespread offence’. Until now, academic discussion has not queried whether the actions taken by the ASA constitute a satisfactory response to the problem.

Dr. Antoniou and Dr. Akrivos had previously analysed, on the International Forum for Responsible Media Blog, the first ads to be banned under the new ASA gender-stereotyping rules, including Volkswagen’s ad promoting the manufacturer’s eGolf model and the TV commercial for Philadelphia cream cheese.

Their new article brings a fresh perspective to the ASA’s approach by paying close attention to the complex structure of gender stereotypes and the interaction between their multiple components. More specifically, Dr Antoniou and Dr Akrivos’ research looks at how the ASA has dealt with different forms of gender stereotyping, including sexualisation and objectification; body image; gender roles, behaviours and characteristics; and the ridiculing of those who do not conform to gender norms.

The authors argue that, although the ASA’s new rule and guidelines constitute a step in the right direction, they represent a missed opportunity to take bolder action against ads that objectify or inappropriately sexualise individuals. Dr Antoniou and Dr Akrivos stated: “the new ASA guiding principles need to be revisited in order to go beyond the traditional male/female binary”. They recommend that the new guidance on gender representation in marketing communications should reflect the multi-faceted nature and fluidity of modern gender identities. “We propose the introduction of a new concept requiring advertisers to give ‘due weight and consideration’ to the diversity of modern masculinities and femininities”.

The University of Essex’s press release on the study can be found here. The research also featured in an article in the global marketing magazine Campaign and a piece in the LGBTQ magazine GScene.

Tackling Online Hate Speech in France – Quo Vadis?

The main entrance of the French Constitutional Council, Palais-Royal, Paris, France (source: Wikimedia Commons)

Dr Clotilde Pegorier, Lecturer in Law, University of Essex

Note: the hyperlinks to the relevant webpages are in their original languages – French and German.

On 13 May 2020, the French Parliament passed a new bill aimed at combating online hate speech. Disputed from the outset, the bill was subsequently ruled by the Conseil constitutionnel – the court that reviews legislation to ensure compliance with the French Constitution – to be partially, even largely, unconstitutional on 18 June 2020. Indeed, the ruling effectively quashed seven of the bill’s provisions and made substantial amendments to several others, notably paragraphs I and II of Article 1. Small wonder that Bruno Retailleau, Vendéan Senator and president of the ‘Les Républicains’ group in the Sénat, spoke of the ruling – in fitting French manner – as having “totally decapitated” the bill.

What is afoot here? And what does this mean for the French government’s efforts to regulate online content?

What was in the Original Bill?

Before reviewing the Conseil constitutionnel ruling, let us first consider the rationale and content of the original bill.

Named for its main sponsor, MP Laetitia Avia of Emmanuel Macron’s ‘La République en Marche’ party, the law was largely inspired by the German Netzwerkdurchsetzungsgesetz (NetzDG), which came into effect in October 2017 and which provides for significant fines for online platforms that do not remove “manifestly illicit” content within a stipulated timeframe of 24 hours of its being reported.

The ‘Loi Avia’ was designed in the light of the NetzDG to update the existing legislative framework supplied by the Law on Confidence in the Digital Economy (Loi pour la confiance dans l’économie numérique, LCEN) of 2004, notably by reinforcing the contribution of digital providers and platforms to the struggle against online hate. Its central provision, set out in Article 1, was to require online platforms falling under the purview of the bill “to render inaccessible, within 24 hours of notification by one or more persons, any content manifestly constituting one of the offences” stipulated in this and other laws – that is, content that violates France’s hate speech provisions. According to the bill, platforms were also obliged to adopt “appropriate resources to prevent the redistribution” of content deemed manifestly illegal (article 2). The scope of the law was to extend to “operators of online platforms […] offering an online public communication service based on connecting multiple parties for the purpose of sharing public content or based on classifying or referencing content by means of computer algorithms, which is offered or placed online by third parties, where this activity on French territory exceeds a threshold, determined by decree” (article 1). Where, precisely, this threshold lay was to be decided subsequently. Notably, the bill covered social media platforms and search engines, but not internet service providers.

Failure to comply with the new law would incur a criminal fine of up to 250,000 euros for individuals and 1,250,000 euros for corporations. In addition, an administrative penalty of up to 20 million euros or 4% of a company’s global annual turnover could be imposed for “serious and recurrent” failures.

The Process of Adoption

It is worth reflecting for a moment on the particular process by which the bill was first adopted. In May 2019, the Government decided to apply the ‘procédure accélérée’ (accelerated procedure) provided for in Article 45 of the French Constitution. This provides that, after a reading by each of the two chambers of Parliament – the Assemblée nationale (roughly equivalent to the House of Commons) and the Sénat (the House of Lords) – and in the case of no agreement being reached on a common text, the Prime Minister or the Presidents of the two Houses can convene a joint committee, comprising equal numbers of members from each House, to propose a compromise text on the disputed issues. This is what occurred here: the two parliamentary chambers could not reach agreement on the text of the bill and a joint committee was constituted. This failed, however, to yield a compromise text acceptable to both sides, and so the ‘normal’ legislative procedure resumed – the original text as amended and adopted by the Sénat went back to the Assemblée nationale, which made its own modifications, and this new text was then returned to the Sénat for further amendment.

As the process stalled in this back and forth between the chambers, the Government eventually decided to ask the Assemblée nationale to give the bill a final reading – again in line with constitutional provisions – and the bill was adopted in May 2020. All of which is to say that, as a consequence of such wrangling, the bill was passed by only one of the two parliamentary chambers, albeit the more ‘democratic’ one. Given the nature of the bill, and the current “state of health emergency” in place in France, one can readily question how well- or ill-advised this move was on the part of the Government. What seems clear, though, is that it lent an air of near-inevitability to subsequent challenge and dispute. Following adoption, on 18 May, 60 members of the Sénat submitted an appeal to the Conseil constitutionnel to contest the constitutionality of the bill.

The Conseil Constitutionnel Ruling

The arguments put forward by the challengers to the bill – and those upheld by the Conseil constitutionnel – were, unsurprisingly, connected to the question of legitimate and illegitimate restrictions on freedom of expression. Unsurprising, because these concerns were already at the forefront of jurisprudential and public debate during the bill’s drafting.

Citing the 1789 “Declaration of the Rights of Man and of the Citizen”, the Conseil constitutionnel determined in its ruling that both paragraph I (demanding the removal of content relating to terrorism and child pornography within one hour) and paragraph II (requiring the removal of hateful content within 24 hours) of Article 1 constituted an “infringement on the exercise of freedom of expression and communication that is unnecessary, inappropriate and disproportionate”. The follow-through from this determination on Article 1 was to render an entire raft of subsequent provisions unconstitutional. The removal window in both scenarios was, the council held, “particularly brief”, and the severity of the proposed sanctions would “only incite online platform operators to remove flagged content, whether obviously unlawful or not,” especially in the absence of any specific ground exempting them from liability. With no judicial intervention foreseen, it would fall to platform operators (as private actors) to determine whether or not particular content was unlawful – a situation that would, in the council’s view, likely encourage an excessively censorious approach and the removal of materials that are in fact lawful.

What remains of the bill after the ruling is modest. Perhaps most notable is the acceptance of a proposal to create an official online hate speech watchdog (article 16). While by no means inconsequential, this and other minor provisions represent a meagre return when set in the context of the bill’s ambitious aim to overhaul the legislative landscape for dealing with online hate speech.

And Now? 

So where does this leave the government’s efforts to police online content? Clearly, this is a substantial setback. While the bill was officially enacted, the council’s ruling stripped it of almost all meaningful impact. If not quite in tatters, the government’s strategy is tarnished, and there is an obvious need for a rethink. Not that there is any sign of surrender – in a statement following the ruling, Laetitia Avia vowed not to give up the fight, and asserted that the judgment offered a “roadmap to improve a plan that we knew to be unprecedented and therefore perfectible.” Thus the show will go on. But the implications of the ruling should not be downplayed. They may also extend beyond national borders – the government had hoped that the new bill might provide a template for the European Commission’s Digital Services Act, scheduled to be put forward by the end of the year. The Commission said that it “took note” of the council’s ruling.

Both in itself and as part of France’s extended efforts to regulate speech across diverse contexts, this recent chapter is variously revealing of the idiosyncrasies of French jurisprudence and constitutional arrangements, of the relationship the French state maintains with its citizens, and of its approach to balancing free speech with anti-discrimination concerns and the fight against harmful content (which differs markedly from that of the US, for example). It has also proved another flashpoint in ongoing debates on the possible limits to freedom of expression and the dilemma of hate speech. That this is a fraught and thorny issue barely needs restating. Nor does its importance. The question of where to set the line between permissible and impermissible speech is contentious, daunting and potentially confusing – most reflective minds would probably admit to being pulled in different directions at different times and in different contexts. Just as we bristle at attempts to muzzle freedom of expression, so we do at the harms caused by hateful speech. Marking that boundary was, is, and will likely always remain, a tightrope walk. How the French government opts to move forward in the coming months will be interesting to watch.

LONDON LIVE sanctioned by Ofcom for broadcasting ‘potentially harmful’ interview on COVID-19

Image by Pexels

Dr. Alexandros Antoniou, Lecturer in Media Law, University of Essex

On 20 April 2020, the UK communications regulator Ofcom ruled that ESTV Ltd had breached its Broadcasting Code by airing an interview on the Coronavirus pandemic which risked causing “significant harm to viewers.”

ESTV Ltd, the licensee, is the owner of the local TV channel London Live, which serves the London area. On 8 April 2020, London Live broadcast an 80-minute interview with the former footballer and sports broadcaster David Icke, who was introduced by the presenter Brian Rose at the start of the programme as “a writer and public speaker known since the 1990s as a professional conspiracy theorist.” At the time of the broadcast, it was estimated that approximately 1.4 million people had been infected globally and the UK Government had introduced its lockdown policy to curb the spread of the virus.

Given the global Coronavirus crisis, the regulator expressed particular concern over the broadcast of Icke’s opinions which “cast doubt on the motives behind the official health advice aimed at reducing the spread of the virus.” The interviewee repeatedly suggested in the programme that the measures taken by the UK Government, other national governments and international health bodies such as the WHO were being implemented to further the malevolent ambitions of a “clandestine cult,” rather than to protect public health. While not expressly mentioning 5G technology, Icke referred, among other things, to an “electromagnetic, technologically generated soup of radiation toxicity” which, he claimed, had compromised the immune system of elderly people. Icke also expressed doubts over the use of vaccines (which are widely accepted by scientific communities as important mechanisms in controlling infectious disease outbreaks and part of a long-term solution to COVID-19), describing them as a “tidal wave of toxic shite” and any decision to make them mandatory as a form of “fascism.”

ESTV Ltd acknowledged that the programme included “controversial” and “unorthodox” material that challenged mainstream thinking, but considered it to be an exploration of Icke’s “extraordinary” views about the origins of the virus and governments’ responses, within the limits of Article 10 of the European Convention on Human Rights. The regulator stated that the licensee was not, in principle, prohibited from broadcasting opinions which diverged from, or challenged, official authorities on public health information and that Icke had a right to hold and express these views. However, Ofcom queried whether, in the current unprecedented circumstances, the programme had ensured that members of the public were “adequately protected” from the inclusion of potentially harmful material, in compliance with Rule 2.1 of the Broadcasting Code.

The regulator stated that some viewers might well have expected that Icke’s opinions would not necessarily be scientifically or otherwise empirically supported, but that viewers were also likely to be “particularly vulnerable” during a global public health emergency. The extended nature of the interview, its sensitive subject matter, the severity of the situation and the degree of challenge (or the inclusion of opposing views) were factors that weighed significantly in the decision-making. Ofcom found that for some 80 minutes, ESTV Ltd had provided David Icke with a platform to set out highly controversial and unsubstantiated claims (which the licensee itself considered “may be absurd”) with minimal challenge within the programme. Moreover, the impact of the limited challenge that was present was minimised by the presenter’s final comments to the interviewee: after shaking hands, Brian Rose said that David Icke had “amazing knowledge and amazing perspectives about what’s going on here.” The regulator concluded that the licensee had failed to adequately protect viewers from potential harm and considered the breach of Rule 2.1 to be serious.

Ofcom directed ESTV Ltd to broadcast a summary of its ruling. Its Sanctions Panel will consider the matter further. Ofcom’s decision was delivered within just two weeks, as the regulator is prioritising cases linked to the Coronavirus in which programmes may have helped spread misinformation or included material of a misleading nature about the illness and public policy in relation to it.

This article was first published in IRIS Legal Observations of the European Audiovisual Observatory and is reproduced here with permission and thanks.

TV Cameras To Be Allowed To Film in Crown Court in England and Wales

Photo by Julian Schiemann

Dr. Alexandros Antoniou, Lecturer in Media Law, University of Essex

On 16 January 2020, the Ministry of Justice announced plans to allow, for the first time in England and Wales, recording and broadcasting from the Crown Court, with the aim of increasing public engagement with the justice system.

Filming has been permitted in the Supreme Court since it was set up in 2009 (although this is carried out by the court itself), and the television broadcasting of Court of Appeal proceedings has been possible in specified circumstances since 2013 under the Court of Appeal (Recording and Broadcasting) Order 2013. The Crown Court (Recording and Broadcasting) Order 2020 will extend this to the Crown Court (which deals with serious criminal cases such as murder and sexual offences) and allow cameras to broadcast the sentencing remarks of High Court and Senior Circuit judges when sitting in open court. No other court user will be filmed, however, and normal reporting restrictions will continue to apply to protect victims or witnesses involved in the case.

The policy aim of this legislative move is to ensure that courts “remain open and transparent and allow people to see justice being delivered to the most serious of offenders.” The legislation has been welcomed by broadcasters such as ITN, Sky and the BBC, and follows a not-for-broadcast pilot run between July 2016 and February 2017 to enable assessment of the practical and technical challenges of filming in the Crown Court.

The 2020 Order prescribes the conditions to be satisfied for the visual and sound recording and broadcast of sentencing remarks in the Crown Court. When these conditions are satisfied, section 41 of the Criminal Justice Act 1925 (which bans photography and filming in courts and their precincts) and section 9 of the Contempt of Court Act 1981 (which makes it illegal to record sound in court and broadcast any audio-recording of court proceedings except with the permission of the court) will not apply.

The legislation comes with safeguards. Whole trials will not be televised and filming will be restricted to the judge alone, who will be seen on camera as they deliver their sentencing remarks. Moreover, recording or live broadcasting can only be carried out by persons who have been given specific permission by the Lord Chancellor. Footage will also be appropriately edited before leaving the courtroom. Where filming is to be broadcast live, there will be a short delay before transmission to avoid breaches of reporting restrictions or any other error. Whilst concerns may be expressed that particular sections of lengthy remarks could be broadcast out of context and create a false impression, the full sentencing remarks of any case broadcast will be hosted on a publicly accessible website. Her Majesty’s Courts and Tribunals Service will retain copyright of the footage and will be able to access any footage taken by broadcasters.

This post first appeared on the legal database IRIS Merlin and is reproduced here with permission and thanks.

Community Radio Station Found in Breach of Ofcom’s Offensiveness Rules

Dr. Alexandros Antoniou, Lecturer in Media Law, University of Essex

On 16 December 2019, Ofcom, the United Kingdom’s communications regulator, found that Radio Caroline had breached Section Two of its Code, which outlines standards for broadcast content so as to provide members of the public with adequate protection from harmful and offensive material.

Radio Caroline, which was founded in 1964 and broadcast from international waters, had been rendered an illegal (pirate) station by the Marine Broadcasting Offences Act 1967, but 50 years later, in June 2017, Caroline was granted a community radio licence by Ofcom. Community radio services are provided on a not-for-profit basis and focus on the delivery of “specific social benefits to a particular geographical community.”

Radio Caroline AM Broadcasting Ltd now holds the licence for Radio Caroline. The station was given the medium wave frequency of 648 kHz (which was once used by the BBC World Service) and now broadcasts in Suffolk and northern parts of Essex. It plays a wide range of album music from the 1960s to the present day, with an audience consisting primarily of individuals aged 45 and over.

On 13 September 2019, Ofcom received a complaint concerning Caroline’s Top Fifteens programme, which is broadcast every weekday morning from 9 a.m. to 10 a.m. In particular, the complaint related to the broadcast of the English rock band Radiohead’s track “Creep”, which contained three instances of the word “fucking”.

Rule 2.3 of the Ofcom Broadcasting Code stipulates that broadcasters, in applying generally accepted standards, must ensure that potentially offensive language is justified by the context. Context includes, but is not limited to, the service on which the material is broadcast, the time of broadcast, as well as the size and composition of the potential audience and the audience’s likely expectations. The same rule also states that “appropriate information should also be broadcast where it would assist in avoiding or minimising offence”.

The licensee acknowledged that there was “no justification for the use of explicit language”. It also stated that it would not have “knowingly play[ed] such a track”, which was aired due to a “simple error” between two volunteers who shared the tasks of scheduling the tracks and voicing links. In order to mitigate the risk of this problem recurring, Radio Caroline responded that it was planning to devise a single database of music so that tracks would not be selected from external sources. Moreover, listener suggestions for tracks would be examined by a staff member and only added to the available list if the content was deemed “acceptable”. The licensee further explained that it had not broadcast an apology “because the problem was not identified until it was brought to [its] notice many days later”.

Ofcom noted the steps Radio Caroline said it was taking and the fact that the language had been broadcast live in error. However, bearing in mind its research, which indicates that the word “fuck” is considered by audiences to be among the strongest and most offensive terms, the regulator held that the majority of listeners at this time of day were “unlikely to have expected to hear the most offensive language”. It took particular note of the fact that the broadcaster had failed to apologise and concluded that Top Fifteens had breached Rule 2.3 of its Code.

This post first appeared on the legal database IRIS Merlin and is reproduced here with permission and thanks. The original post can be accessed here.