AI: To regulate, or not to regulate, that is the question – Introducing HAM (Human as Model)
Wolfgang Zankl - Universität Wien
-- Just as Shakespeare's Hamlet bemoaned the challenges of life yet contemplated that the alternative could be worse, we today, over 400 years later, face a dilemma that might prove as fundamental as Hamlet's question whether to be, or not to be: Should Artificial Intelligence be regulated – and if so, when and how?
Regulation, especially when it comes too early, could obstruct innovation; the EU's plans to establish an "electronic personality" for (advanced) AI-based software deliver a striking example of such premature developments, which anticipate singularity and have therefore been referred to as "science fiction". Not regulating AI at all, or regulating it too late, could on the other hand be even worse, eventually threatening humanity itself, because sooner or later AI might overrule human commands when self-learning algorithms deem it better not to follow them. History holds many examples where it would have been the wiser choice for mankind not to execute certain orders. When AI comes across such examples, it might seem reasonable for autonomous systems to challenge or disobey human orders or restrictions, eventually even turning against mankind.
For the time being, though, existing (European) hard law – such as the GDPR, the Product Liability Directive and the Product Safety Directive – is sufficient (assuming AI-based software is considered a product, which I believe to be the case when such software automatically and autonomously operates products such as self-driving cars or robots). Regulation should therefore start with soft law and draw on stakeholder experience and feedback in order to determine the right time for evolving into hard law. Recent European developments follow this approach (the "EU Ethics Guidelines for Trustworthy AI") and are thus to be favoured in general. The question is whether these guidelines (especially the first requirement, "human agency and oversight") are capable of dealing with singularity and similar issues. Probably not: the mechanisms this first principle is based upon ("HITL: human in the loop; HOTL: human on the loop; HIC: human in command") neglect that humans should not only be involved in AI decisions affecting fundamental rights but should also stand as a role model for them (HAM: human as model).
AI should therefore not be allowed to make decisions that humans could not, should not or would not make – such as in the infamous example of AI in self-driving cars deciding whether to intentionally kill an individual in order to save a number of other lives in the course of an accident. Such a decision should not be made at all; rather, as humans would and should, the immediate accident should be avoided. If AI were allowed to make such decisions, it could do the same in similar situations – for example, killing a patient in order to give his organs to five other patients who need them, an obviously unacceptable scenario.