Daithí Mac Síthigh

The Road to Responsibilities for Internet Intermediaries


With legislative change to intermediary liability now seeming more likely than ever, renewed attention is being paid to a long-established assumption: that the legal exposure of intermediaries is a key (and often effective) lever through which the availability of Internet-delivered content to mainstream audiences can be controlled.

Over half a decade ago, I (and others) pointed out that the law on intermediaries was fragmenting, with different approaches being promulgated in different areas of law (e.g. copyright, defamation, or privacy). In my latest work on the topic (forthcoming in Information & Communications Technology Law, and available on SSRN and at the publisher’s website (£)), I found myself reflecting on and ultimately agreeing with Lilian Edwards’ recent argument, in her chapter on intermediaries in Law, Policy, and the Internet, that a general narrative against liability (denying moral responsibility and fearing the collapse of the information society) was deconstructed in the 2000s and ‘lie(s) in shreds’ now.

The current trend in intermediary law is to propose new measures rather than to amend the law on liability. Indeed, those who are critical of the current treatment of online content are clearly concentrating their efforts not on amending the general rules on liability, but on identifying new ways in which the desired effects can be brought about. In some cases, this may in practice deprive those general rules of much of their commercial and reputational relevance. I place particular emphasis on the cumulative effect of statutory and non-statutory measures, combined with press and popular sentiment that is increasingly critical of the power of the technology industries (what some have called a ‘techlash’), in relocating arguments about responsibility and duty, which in earlier years would have been unthinkable or at least at the fringes of political debate, to the mainstream of media and Internet regulatory conversations.

The first wave of Internet-related legislative changes and landmark cases on this topic, in the late 1990s, addressed a fundamental question: whether, and if so to what extent, to fix through legislation the degree of liability faced by intermediaries under existing causes of action, such as defamation and other torts. Famously, US law broadly excludes the possibility of liability for many intermediaries, though it takes a narrower (conditional) approach in respect of copyright. The European Union’s E-Commerce Directive takes a broad approach for mere conduits (Internet access providers) and a somewhat vague conditional approach for hosts, supplemented by specific provisions elsewhere (e.g. for injunctions in copyright and other IP disputes).

There is plenty of evidence that change is coming, or is already underway. For example, new statutory provisions affecting certain intermediaries have been adopted within the last two years: the infamous ‘article 13’ (now article 17) of the new Directive on copyright is the best-known example, though new obligations for ‘video sharing service providers’ added to EU media law (the Audiovisual Media Services Directive) are also provoking debate as member states set about implementing them. More interesting again is the proliferation of ‘voluntary’ measures (e.g. on hate speech, on illegal content more generally, and on disinformation or ‘fake news’), and evidence that attitudes to the duties of those in the tech sector, at least in the sense of good corporate behaviour, are hardening. Even the now-aborted attempt by the United Kingdom to extend the regulation of sexually explicit material, through various means including a statutory obligation on ISPs to block websites not compliant with regulatory obligations (e.g. age verification), demonstrates the way in which systems developed in a non-statutory context (e.g. the industry-led blocking of indecent images of children) are attractive to lawmakers with other problems in mind.

Unsurprisingly, alternative approaches are being considered by various actors. In New Zealand, legislation provides for the cross-cutting control of ‘harmful digital communications’. The Harmful Digital Communications Act 2015 is neither an amendment to existing law nor a new, standalone criminal offence, but an interlocking set of incentives and new remedies; it provides for a complaints-handling body (NetSafe) and a set of guiding principles (prohibiting, for example, the disclosure of sensitive personal facts). The intention is that NetSafe deals with complaints in the first instance, with the courts able to act where NetSafe’s requirements have not been complied with or where its response was not satisfactory. In the UK, a recent White Paper sets out a new approach to ‘online harms’, characterised by a focus on a possible statutory ‘duty of care’ for service providers and a new regulatory infrastructure to support it; it sets out thoughts on how ‘digital products and services [can be] designed in a responsible way, with their users’ well-being in mind’.

Even while I was working on this project, attention to Internet regulation in the UK continued to intensify. This has included the House of Lords Communications Committee asking questions such as ‘is there a need to introduce specific regulation for the internet?’ and ultimately recommending a set of 10 principles to be pursued by an ‘Internet Authority’, and the House of Commons Science and Technology Committee asking about the impact of social media use by children and proposing a principles-based regime, new duties, transparency obligations, and a statutory code of conduct. The Government proposes to make the UK ‘the safest place in the world to be online’, and civil society groups have made proposals ranging from an ‘Office for Responsible Technology’, which would, amongst other things, ‘support people to find redress’ for technology-related harms both individual and collective, to ‘online harm reduction’ through a statutory duty of care (which would not replace any existing remedies) and a ‘social media harm regulator’.

Any new approach will face a number of design and implementation dilemmas. One will be the balance struck between judicial (or similar) resolution of disputes and approaches led by a regulatory or administrative agency (and, if an agency, which one). Many have sought to incentivise out-of-court settlement of disputes, especially where swift action rather than compensation is the priority. A second will be whether to proceed on the basis of national boundaries or to think more broadly. As the UK prepares for Brexit, some in Government have suggested that there may be opportunities to construct a better system; however, the UK is unlikely to have a completely free hand here, not least because the E-Commerce Directive (never mind the GDPR) is an important component of the EU’s single market. Future negotiations on market access between the UK and the EU will surely see some discussion of the harmonisation of liability or other legal duties which might otherwise constitute barriers to trade.

Overall, I find ample evidence of a discursive shift towards a ‘media’ paradigm for services that have, for the last two decades, instead fallen within the general category of information society services under the E-Commerce Directive. Non-legislative measures, such as the European Commission’s approach to ‘illegal content’, point towards special responsibilities for platforms that do not exercise editorial control in a conventional sense, as do passing comments in the ECtHR’s Delfi decision regarding duties and responsibilities in light of the ‘particular nature of the Internet’. What seems to emerge is a new notion of responsibility, though still not ‘editorial responsibility’ in a full sense; this parallels an important distinction between accountability and liability drawn by Martin Husovec in his work on copyright law.

Daithí Mac Síthigh - Queen’s University Belfast

