On the Regulation of Advanced Algorithms
Fotis Fitsilis - Scientific Service, Hellenic Parliament
When talking about Artificial Intelligence, or AI, most people perceive it as a technology with human-like intelligence. Technically, however, such a level of technology, the so-called singularity, might never be achieved. Today's AI-based tools are computer programs employed across a wide range of sectors of human activity. Moreover, they rely not on a single technology but on a group of different ones. At the core of these technological approaches are computer algorithms: strictly defined, machine-implementable instructions. Hence the talk of advanced algorithms and of their capacity to disrupt entire economic sectors, as well as state institutions. To illustrate this point, let's focus on three examples.
Digital platforms. With billions of people around the world spending much of their private and occupational time on digital platforms, a great deal of policy debate has migrated into the digital sphere. Public discourse within privately-owned social networks is mainly monitored by non-transparent algorithmic processes. In democratic societies, such algorithmic conduct may therefore influence, and has seriously influenced, electoral and political activities, as was seen during the 2016 United States presidential election.
Discrimination issues. Programs that use machine-learning algorithms improve the precision of their predictions using training datasets. These datasets may contain data points that represent real-life human conduct, which may itself be biased. Hence, algorithms may well perpetuate discriminatory outcomes. Such machine bias could have serious implications if it enters judicial decisions, for example, or voter decision-making during an election.
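The mechanism described above can be made concrete with a minimal sketch. The data, groups, and decision rule below are entirely hypothetical: a toy "model" that simply learns historical approval rates per group will reproduce whatever discrimination the historical record contains.

```python
# Toy illustration (hypothetical data): a model trained on biased
# historical decisions reproduces that bias in its own predictions.
from collections import Counter

# Hypothetical training set of past loan decisions, recorded as
# (group, outcome). Group "A" was historically approved far more
# often than group "B" -- the bias lives in the data itself.
training_data = (
    [("A", "approve")] * 80 + [("A", "deny")] * 20
    + [("B", "approve")] * 30 + [("B", "deny")] * 70
)

# "Training": estimate each group's approval rate from the record.
approval_rate = {}
for group in ("A", "B"):
    outcomes = Counter(o for g, o in training_data if g == group)
    approval_rate[group] = outcomes["approve"] / sum(outcomes.values())

def predict(group):
    """Predict the majority historical outcome for the applicant's group."""
    return "approve" if approval_rate[group] >= 0.5 else "deny"

# Two otherwise identical applicants receive different predictions,
# purely because the training data encodes past discrimination.
print(predict("A"))  # approve
print(predict("B"))  # deny
```

Real machine-learning models are far more complex than this frequency table, but the failure mode is the same: without intervention, a model optimized to match historical decisions will treat past discrimination as a pattern worth learning.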
End-of-time scenarios. Perhaps not imminent, but definitely to be taken seriously, are scenarios that link the rise of AI to the end of human civilization as we know it. Science fiction or not, this deserves serious consideration when high-caliber scientists like Stephen Hawking and industry leaders such as Elon Musk sound the alarm.
Given such a significant influence of AI on societal and systemic processes, one needs at least to investigate whether advanced algorithms should be regulated and, if the answer is affirmative, to assess the dimensions of such regulation and define its parameter space. These constitute the basic research questions of a year-long study conducted within the Scientific Service of the Hellenic Parliament, a study that resulted in the publication of my recent book, Imposing Regulation on Advanced Algorithms (Springer, 2019).
While there are also unconventional calls to regulate algorithms using ancient Greek patterns, such as a citizens' council, legal analysis and the classification of good practice suggest a series of potentially more efficient regulatory actions. For instance, the type of regulation, i.e. legislative or judicial, could be considered. The positioning of regulation within a multi-level governance scheme could also be investigated, as well as its timing. Of particular interest is the nature of regulation, which distinguishes whether regulators impose modifications on the code itself or on the working environment of the algorithm.
As the life cycles of advanced algorithms become shorter, it might be tempting to develop specialized rather than general legislation. Another option would be a principle-based legal framework that relies on legal values and newly established human rights, such as the online 'right to be forgotten'. In this regard, recent discussions and developments around ethical AI are encouraging.
When it comes to the stakeholders of algorithmic regulation, the role of civil society in monitoring algorithmic conduct could be important. An algorithmic monitor, an observatory that harnesses the capacity of the crowd, might be an interesting proposal for spotting, early on, cases where algorithmic regulation applies. This work also studies the role of parliaments. More specifically, research units within parliaments are investigated as potentially significant in advancing the capacity of representative institutions to tackle economic and societal challenges related to the application of AI tools and services.
Last but not least, the role of supranational institutions, such as the European Union and perhaps the Council of Europe, is discussed. The Union, which recently produced a white paper on AI, seems to operate at the forefront of regulatory activity worldwide. Legal frameworks such as the General Data Protection Regulation (GDPR) and the Markets in Financial Instruments Directive (MiFID) suggest as much. However, it remains to be seen whether such legal instruments also produce negative effects, curbing innovation overall or weakening the competitive advantage of EU-based enterprises. Perhaps, therefore, the establishment of such 'norms' should fall under general international law and be led by a United Nations office.