Automated Decision Making and the Right to a Human Explanation: is This a Human Right?

Elena Falletti - Università Liuc-Carlo Cattaneo, Italy

-- Analyzing the use of algorithms in automated decision-making procedures raises questions about subjective positions of legal relevance. In this regard, the first paragraph of Article 22 GDPR (General Data Protection Regulation of the European Union), entitled "Automated individual decision-making, including profiling", establishes that: "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her".

This norm gives rise to important questions concerning the very nature of decision-making through algorithms. Hypothetically, the legal nature or legal effect of the concept of "decision" could be restricted to circumstances such as financial solvency or the predictability of motor vehicle accidents. In these cases, the "right to explanation" refers to the algorithmic decision obtained.

The most interesting element, which could give rise to comparative judicial disputes given that the GDPR applies in all the countries of the European Union, concerns the provisions of Article 22, paragraph 2, letters a) and c), which, read with paragraph 3, require the implementation of "appropriate measures" to protect "the rights, freedoms and legitimate interests" of the data subject.

What counts as "appropriate" when implementing such measures? It would seem unquestionable that the only element capable of satisfying this "appropriateness" is human intervention, i.e. someone who has the necessary authority, ability and competence to modify or revise the decision contested by the user. Another authoritative view holds that such appropriate measures may also consist of automated systems that monitor the algorithms: periodic reviews, procedures that verify the accuracy of the decision-making process, or mechanisms that correct errors, discriminatory outcomes, or inaccuracies stemming from an outdated database, so as to prevent self-learning algorithms from building on wrong data and processes.

From this perspective, American scholarship speculates about a "right to a human decision", referring to human sensitivity and the ability to grasp the nuances of factual circumstances that justify the merits of a decision, and, at the same time, to use the appropriate terminology to qualify legal concepts and apply them to the facts to which the automated decision relates. Examples include decisions on mass applications for citizenship or settled status, as in the Brexit case, or for welfare benefits, with significant impact on the citizens involved and their families. In both cases, the pre-selection of the requirements for obtaining what the applicant seeks is performed by an algorithm.

Nevertheless, some scholars express perplexity about the effectiveness of the right to explanation, especially given the difficulty of interpreting the legislative passages that envisage it within the GDPR itself, together with doubts about the mandatory nature of this explanatory right.

The European perspective on this problem is more elaborate, since the enforcement of Article 22 GDPR is only one of the legal standards provided by European law. The other concerns the compliance of the ADM legal framework with Article 8 of the ECHR (European Convention on Human Rights), protecting the private and family life of the person subjected to the automated decision, together with Article 14 ECHR on the principle of non-discrimination, in consideration of how impactful the refusal of these measures may be on the life of the applicant and his/her family members when the algorithm does not recognize requirements that actually exist.

Under the European framework summarized above, it is possible to investigate whether the right to a human decision could be combined with the right to an explanation of the decision reached by the algorithm (which would be a corollary of the right to a human decision itself), and whether this combination could give rise to a specific human right to a human decision.


Copyright © 2018 All Rights Reserved. Faculty of Law, The Chinese University of Hong Kong
