Artificial Intelligence and the Limits of Legal Personality
Simon Chesterman – National University of Singapore Faculty of Law
-- As artificial intelligence (AI) systems approach human intelligence, take on ever more responsibilities, and create things of beauty and value, should we recognize them as persons before the law?
In a forthcoming article for International and Comparative Law Quarterly, I argue that there are at least two discrete reasons why AI systems might be recognized as persons before the law. The first is so that there is someone to blame when things go wrong. This is presented as the answer to potential accountability gaps created by their speed, autonomy, and opacity. A second reason for recognizing personality, however, is to ensure that there is someone to reward when things go right. A growing body of literature examines ownership of intellectual property created by AI systems, for example.
The tension in these discussions is whether personhood is granted for instrumental or inherent reasons. Arguments are typically framed in instrumental terms, with comparisons to the most common artificial legal person: the corporation. Yet implicit in many of those arguments, or explicit in their illustrations and examples, is the idea that as AI systems approach the point of indistinguishability from humans — for example, when they might pass the famous Turing Test — they should be entitled to a status comparable to natural persons.
Until recently, such arguments were all speculative. Then in 2017 Saudi Arabia granted ‘citizenship’ to the humanoid robot Sophia and an online system with the persona of a seven-year-old boy was granted ‘residency’ in Tokyo. These were gimmicks — Sophia, for example, is essentially a chatbot with a face. In the same year, however, the European Parliament adopted a resolution calling on its Commission to consider creating ‘a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.’
The most immediate challenge is whether some form of juridical personality would fill a responsibility gap or be otherwise advantageous to the legal system. Based on the history of corporations and other artificial legal persons, it does not seem in doubt that most legal systems could grant AI systems a form of personality; the more interesting questions are whether they should and what content that personality might have.
Instrumental reasons could, therefore, justify according legal personality to AI systems. But they do not require it. The implicit anthropomorphism elides further challenges such as defining the threshold of personality when AI systems exist on a spectrum, as well as how personality might apply to distributed systems. It would be possible, then, to create legal persons comparable to corporations — each autonomous vehicle, smart medical device, resume-screening algorithm, and so on could be incorporated. If there are true liability gaps then it is possible that such legal forms could fill them. Yet the more likely beneficiaries of such an arrangement would be producers and users, who would thus be insulated from some or all liability.
An alternative argument is to draw an analogy with natural persons. It might seem self-evident that a machine could never be a natural person. Yet for centuries slaves and women were not recognized as full persons either. If one takes the Turing Test to its logical conclusion, as in Blade Runner, it is possible that AI systems truly indistinguishable from humans might one day claim the same status. Although arguments about ‘rights for robots’ are presently confined to the fringes of the discourse, this possibility is implicit in many of the arguments in favour of AI systems owning the intellectual property that they create.
Taken seriously, moreover, the idea that AI systems could equal humans suggests a third reason for contemplating personality. For once equality is achieved, there is no reason to assume that AI advances would stop there. Though general AI remains science fiction for the present, the prospect invites consideration of whether legal status could shape or constrain behaviour if or when humanity is surpassed. Should it ever come to that, of course, the question might not be whether we recognize the rights of a general AI, but whether it recognizes ours.
This text draws on material in my paper, "Artificial Intelligence and the Limits of Legal Personality," forthcoming in International and Comparative Law Quarterly and available in draft here.