• Karni Chagal

Am I an Algorithm or a Product? When Products Liability Should Apply to Algorithmic Decision-Makers

The full paper is to be published at 30 Stanford Law & Policy Review __ (forthcoming) and is accessible at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3241200

One of the most frequently raised questions concerning the legal implications of developing and using AI systems is that of liability. Who should be liable for damages caused by a driverless car, a medical application, or a robotic surgeon? How would such liability be determined? Should the mechanism for determining liability more closely resemble the one currently applied to human tortfeasors, or the one applied to damages caused by machines?

Much of the legal scholarship addresses, in that context, the tort framework that applies to damaging products – products liability – and its compatibility with AI and self-learning systems. Under products liability, the seller of a defective product is liable for the physical harm caused to the user or her property (liability is determined on a strict or negligence basis, depending on the type of defect, and varies among states). While some argue that AI systems, like other sophisticated systems before them, ought to remain subject to products liability, many others believe that AI systems are "no longer tools in the hands of humans" and therefore warrant different legal treatment (for example, in the form of no-fault insurance schemes, of examining whether the action of the system itself was 'negligent', as is done in the case of a human tortfeasor, or even of granting robots their own legal status).

An important piece of the puzzle not yet discussed in depth is how to differentiate between 'traditional' systems that may continue to be subject to products liability rules, and those systems that indeed involve 'something unique' requiring a rethinking of the products liability framework's applicability. An autopilot, for example, is a sophisticated algorithmic system that replaces humans in executing complex tasks, reaching decisions in split seconds in a manner that outperforms humans. Yet autopilots have for years been subject to products liability claims whenever they were involved in causing damage. What differentiates autopilots from, for example, driverless vehicles, which many argue ought not be subject to products liability?

The article analyzes the general concept of "autonomy", often raised as a classifier between 'traditional' systems subject to products liability and those that would require different legal treatment. It explains why using a system's level of autonomy as a classifier would be both highly complex and ultimately unhelpful. It then proposes an alternative approach for differentiating between the systems: analyzing the main rationales behind products liability rules (promoting safety and compensating victims) and examining which concrete features of AI systems are compatible with those rationales and which would undermine them. The article is thereby able to provide a simple, easy-to-use indicator of whether a given system warrants novel legal treatment or may remain 'under the control' of products liability rules.

For example, the article finds that features such as the 'life-and-death nature of decisions made by the system', the 'need for an immediate response by the system', or the system's reliance on dynamic sources of information are all associated with incompatibility with the rationales behind products liability (whether because they reduce manufacturers' ability to increase the safety of their products, or because they make compensation less accessible to victims).

Using the eight "yes or no" features identified in the article to classify a given system by its ability to promote the rationales behind products liability rules thus gives decision-makers a simple method for deciding whether the system ought to remain subject to products liability rules or, alternatively, might warrant special legal treatment.

Karni Chagal, University of Haifa, Faculty of Law
