CFRED CUHK Law

Artificial Intelligence and more or less strict liability

Updated: Jan 22

Wolfgang Zankl - University of Vienna


The authority that algorithms are given to act should, in my view, be oriented not only towards what software can do technically, but also towards how humans would, could and should act in certain situations. I have called this HAM: Human As a Model, which means, amongst many other implications, that algorithms may never be programmed to decide in the abstract whose life is worth more or less, for instance in a self-driving car accident or in so-called triage situations where, to use a current example, not all COVID-19 patients can be ventilated at the same time. When applying the HAM approach it has to be taken into account that we are increasingly dependent on, confronted with and controlled by software programs – such programs being immaterial objects, whereas legal frameworks are, in general, based on interaction with physical things. Hence, the question is whether risks arising from software – especially when it acts autonomously, as in self-driving cars or other machines interacting with humans – can be dealt with by conventional legal frameworks.


Under European law, for example, the Product Liability Directive (PLD) implemented strict liability back in 1985. It defines products as “all movables, with the exception of primary agricultural products and game, even though incorporated into another movable, including electricity.” With regard to defective (intelligent) software, it is safe to say that the Directive is applicable when such software is directly installed in a machine: in that case it has become part of the product and causes its malfunction.


But what if software is not part of the machine itself, but runs independently, operating machines remotely – for example via mobile signals or over the internet? Should the programmer of the software then be (strictly) liable under product liability rules? Art 2 PLD, which states that the definition of a product includes electricity, indicates the opposite, because otherwise it would not have been necessary to include electricity. In other words: if immaterial things had been included anyway, it would – electricity being immaterial – not have been necessary to mention it explicitly. And indeed, it is a rather common view that software is not subject to the Directive. In some European countries immaterial things are even excluded explicitly.


This seems somewhat inconsistent, though, because risks arising from immaterial things are – compared to material things – by nature harder to control and avoid: when a machine is defective, this is usually obvious and precautions can be taken. When software has a problem, the flaw usually cannot be detected before it manifests itself. On this basis I believe that software should not only be treated as part of a product when it is incorporated into that product, but also as a product in its own right when it operates remotely by exercising immediate control over the product. In terms of potential hazards there is no reason to treat these two situations differently. When, for example, a device ventilating COVID-19 patients is operated by a malfunctioning program that harms patients instead of healing them, the producer of this program should be held strictly liable, no matter whether the program is part of the device or operates remotely. The Product Liability Directive, indicating the opposite, is therefore unable to deal with the increasing risk potential of software.


The same can be said – at least partly – about the recent EU Copyright (DSM) Directive, which does not clearly state to what extent it applies to liability questions arising from the so-called upstream problem. This problem occurs when software is fed with IP-protected material in order to learn. The use of such samples is crucial and typical for machine learning, which relies on training. The new Copyright Directive allows text and data mining for purposes of scientific research (art 3) and, for other purposes, on condition that the use of works has not been expressly reserved by their rightholders in an appropriate manner, such as machine-readable means in the case of content made publicly available online (art 4) (see note at end).
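The Directive does not prescribe any particular technical format for such a machine-readable reservation. Purely as an illustration of how a data-mining crawler might honour an opt-out, the following Python sketch checks a fetched web page for a reservation meta tag before admitting it to a training corpus. The tag name "tdm-reservation" is an assumption here, modelled on a proposed web convention, and is not mandated by art 4.

```python
# Illustrative sketch only: check a fetched HTML page for a machine-readable
# text-and-data-mining (TDM) reservation before adding it to a training corpus.
# The meta tag name "tdm-reservation" is an assumption (a proposed convention);
# art 4 of the DSM Directive does not mandate any particular format.

from html.parser import HTMLParser


class TDMReservationParser(HTMLParser):
    """Looks for <meta name="tdm-reservation" content="1"> declarations."""

    def __init__(self):
        super().__init__()
        self.reserved = False

    def handle_starttag(self, tag, attrs):
        if tag.lower() != "meta":
            return
        attr_map = {name.lower(): (value or "") for name, value in attrs}
        if attr_map.get("name", "").lower() == "tdm-reservation":
            # A content value of "1" signals that the rightholder reserves
            # text-and-data-mining rights for this page.
            if attr_map.get("content", "").strip() == "1":
                self.reserved = True


def may_use_for_training(html_text: str) -> bool:
    """Return True only if no machine-readable TDM reservation was found."""
    parser = TDMReservationParser()
    parser.feed(html_text)
    return not parser.reserved


if __name__ == "__main__":
    page = '<html><head><meta name="tdm-reservation" content="1"></head></html>'
    print(may_use_for_training(page))  # False: rights reserved, skip this page
```

The point of the sketch is modest: expressing and respecting such a reservation is technically trivial, so the practical weight of art 4 lies in whether rightholders actually declare it and whether data miners actually check for it.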


Yet it remains open whether machine learning is mere text and data mining (as considered by Schönberger 2018); but even if it were, the creation of new works by AI imitating protected material clearly exceeds machine learning, as discussed below. I do hope that this aspect will be picked up more precisely by European legislatures when the Directive is implemented into national laws. But this implementation is due quite soon (by June this year) and so far we have seen no attempts to address these issues more specifically than in the Directive itself. So I rather doubt that this will be regulated conclusively, which means that, for the time being, we have to deal with general rules.


Under these general rules, Schönberger is rather reluctant to assume IP liability for the upstream issue. He points out that copyright has always been a matter of the public sphere, in the sense that its primary function is to protect an author from acts of expressive substitution by controlling the public diffusion of the work. Copyright has never been, Schönberger says, about regulating access to or use of works. This is convincing when existing works are used simply to make software learn what music, literature or painting is all about and to create something new inspired by the works of others. In fact, this was the case in the example Schönberger refers to, in which researchers used more than 11,000 novels to train a neural network to model a system that can create natural language.


Another recent example is the so-called Belamy project, in which an algorithm was programmed to create a portrait and trained on a data set of 15,000 portraits from an online art encyclopedia, spanning the 14th to the 19th century. Amazingly, this rather blurry piece of computer art was sold at Christie's in New York for USD 432,500. Even more amazing than this price is another, similar project called The Next Rembrandt: a 3D-printed painting made solely from data on Rembrandt's body of work, created using deep learning algorithms and facial recognition techniques. Its resemblance to original portraits by Rembrandt is so striking that many experts claimed it is a more typical Rembrandt than Rembrandt's own portraits. This not only challenges the notion of what creativity really is, but also raises the question of how to treat cases in which artists are imitated by software that is fed, for example, with an artist's songs in order to reproduce his or her very own style and expression, so that the “new” song sounds exactly like that specific artist.


If it is considered an infringement to copy one song and reproduce it with only slight modifications, could it then be argued that it is all the more an infringement when several or all of an artist's creations are used to reproduce and publish something that is technically new but still imitative and thus a result of someone else's creativity or personality? (Leistner 2021) But even if copyright protection were denied on the ground that no specific piece of work was reproduced and published, imitating someone's personality might also infringe personal rights when the very characteristics of a person are more or less copied and duplicated. In the same way that a person is basically protected against being unwillingly photographed or having his or her voice recorded without permission, protection could be granted to someone whose personality is being “copied”. Again, this could be derived from legal logic: if taking unauthorized pictures of a person or recording a person's voice is an infringement, is it not even more so when someone's personality is copied?


So, at least such complete and perfect imitations by AI might in the end lead to strict liability as we know it from many IP concepts. It remains to be seen which approach – a rather moderate or a rather strict one – will prevail.


[Note from paragraph five: This relativizes, to some extent, the criticism expressed of models allowing copyright holders to make reservations, see Kung-Chung Liu/Shufeng Zheng, "Protection of and Access to Relevant Data," in Jyh-An Lee/Reto M. Hilty/Kung-Chung Liu (eds.), Artificial Intelligence and Intellectual Property (2021) 379: This “might seem to be a balance between copyright protection and new technology development, but would cripple the effect of the exemption. Data processors would have to find out whether the copyright holders have made reservations and then locate them. This could be time-consuming and difficult, considering the various sources of data and a lack of registration for copyright. To allow reservation would open the door for right holders to refuse to provide access to data mining”. See also Leistner, "Protection of and Access to Data Reform under European Law," in Jyh-An Lee/Reto M. Hilty/Kung-Chung Liu (eds.), Artificial Intelligence and Intellectual Property (2021) 399: “Reform is mainly needed in regard to certain exceptions to copyright protection, where in particular the very new exceptions for text and data mining in the DSM Directive already seem outdated and insufficient to deal comprehensively with the challenges and opportunities of the data economy”.]

