How will Sophia be questioned?

Subhaprad Mohanty – National Law University Odisha

-- Sophia, a humanoid robot created by the Hong Kong-based company Hanson Robotics, was granted citizenship by Saudi Arabia in 2017. This step places Sophia on a par with human beings in society, with equal access to all the attendant benefits. Hypothetically speaking, what if Sophia commits a felony? Will she be charged by the law enforcement authorities? What would be the appropriate punishment for her? Will the State delete her software and dismantle her body? This kind of legal conundrum is going to baffle policymakers in the near future.

My paper titled 'The Liability Conundrum in AI: Fixing the Gap,' presented at the recent Machine Lawyering conference, deals with this specific set of questions pertaining to liability in AI-driven systems. I divided the paper into two parts: the Identity Gap and the Accountability Gap. The Identity Gap opens the debate about granting legal personhood to AI, since the absence of legal personality absolves AI of any kind of liability whatsoever. The precedent of granting legal personality to non-human entities raises the question of whether the same courtesy can be extended to AI. The Accountability Gap refers to the laws and policies adopted by legal systems around the world to fix accountability in AI-related disputes, notwithstanding the fact that most AI-based systems have no form of legal liability attached to them.

The Identity Gap section of the paper begins by discussing extensively the article 'Legal Personhood for Artificial Intelligences' by Lawrence B. Solum. The article raises the key question of whether there is any moral and legal justification for granting legal personhood to AI, and formulates two models, the Constitutional Personhood model and the Trusteeship model, as anvils on which the question of legal personhood can be tested. The question of legal personhood gains paramount importance because the liability question can be settled in advance only by ascribing legal personality to AI-based systems. The examples of non-human entities such as corporations, temples and even the environment enjoying separate legal status further ignite the debate about extending the same status to AI.

The Accountability Gap section of the paper begins with scattered examples from around the world where the use of AI has had hazardous consequences. Incidents ranging from Microsoft Tay's debacle in 2016 to the recent killing of a pedestrian by an Uber driverless car have brought into focus the need to fix accountability for all stakeholders using AI systems. Such real-life incidents also underline the need to resolve the liability crisis plaguing the AI ecosystem.

The legislative response has been tepid so far. No jurisdiction has any form of centralised law directly dealing with the legal questions arising from AI -- barring, of course, the Grishin Law in Russia, which is still in the drafting stage. Similarly, the Right to Explanation under the General Data Protection Regulation and the Algorithmic Accountability Act in the United States of America address only the narrower question of bias in algorithms.

One of the main issues discussed in my paper concerns the various models proposed by legal scholars over the years to devise a liability formula for AI systems. From George Cole, who relied on product liability theory for AI, to David Vladeck, who proposed strict liability, the debate remains unsettled. Interestingly, the European Commission, in its recent report 'Liability for Artificial Intelligence and Other Emerging Digital Technologies,' has stuck to the product liability model for deciding the liability question. Similarly, Gabriel Hallevy and J.C. Kingston have put forth models for deciding criminal liability. Their line of reasoning opposes a one-size-fits-all approach to attributing liability to AI-based systems.

The overall question of liability has vexed academics, policymakers, judges and lawyers all over the world. In the age of Web 3.0, marked by a tectonic shift in the internet landscape with the arrival of AI accompanied by blockchain and the Internet of Things, data has become the new oil and privacy has started to resemble a thing of the past. In this age of interconnectedness, the role of law gains utmost precedence; technology prances forward while law crawls behind. The question of liability must be explored in the midst of this new social upheaval.

