
Artificial Intelligence, Discrimination and Social Welfare

Kevin Bauer - Leibniz Institute for Financial Research SAFE

The use of algorithms for automated decision making can reinforce existing discriminatory practices and generate welfare losses. In a recent paper, "The Terminator of Social Welfare? The Economic Consequences of Algorithmic Discrimination," I examine this phenomenon together with colleagues from Goethe University.

Driven by advances in machine learning (ML), the use of artificial intelligence (AI) systems to support or even automate human decision making has become an integral part of the operational business of many public and private organizations. Organizations expect substantial efficiency gains from the broad application of AI systems.

At their core, the majority of today's AI systems consist of ML algorithms, which are designed to learn, more or less independently, internal representations of real-world relationships between variables from large amounts of data. The learned patterns should ultimately allow highly accurate individual-level predictions of a variable of interest (the label) from available information (the features). These predictions can then inform decisions under uncertainty and in environments of asymmetric information. In the financial sector, for example, AI systems are increasingly used to manage risks at different levels. At the level of the individual client, ML algorithms harness historical customer data to predict the credit default risk of applicants, classify them as good or bad risks, and ultimately decide on the granting of credit or support loan officers' decision making. Today, consumer lending at many banks runs on an almost completely automated process.

On the one hand, this trend holds enormous potential for increasing productivity and customer convenience and for reducing costs. On the other hand, there is the danger that the broad integration of AI systems will lead to serious, irreversible social rifts, among other things by systematically favoring or disadvantaging certain groups in algorithmically made or supported decisions.
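
To fix ideas before turning to these risks, here is a minimal, hypothetical sketch (in Python) of the kind of automated credit-scoring pipeline described above: features are mapped to a predicted repayment probability, and a cutoff turns that prediction into a credit decision. The synthetic data, the three features, and the 0.5 cutoff are illustrative assumptions, not the setup studied in the paper.

```python
# Minimal, hypothetical sketch of the pipeline described above:
# features -> predicted repayment probability -> loan decision.
# Synthetic data, feature names, and the 0.5 cutoff are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic historical applicants: income, debt ratio, years of credit history.
X_hist = rng.normal(size=(1_000, 3))
# Synthetic labels: 1 = loan repaid, 0 = default (driven by the features plus noise).
y_hist = (X_hist @ np.array([1.0, -1.5, 0.8])
          + rng.normal(scale=0.5, size=1_000) > 0).astype(int)

model = LogisticRegression(max_iter=1_000).fit(X_hist, y_hist)

# A new applicant: the model predicts a repayment probability (the label),
# and a fixed cutoff turns the prediction into a credit decision.
applicant = rng.normal(size=(1, 3))
p_repay = model.predict_proba(applicant)[0, 1]
print("predicted repayment probability:", round(p_repay, 2),
      "-> grant" if p_repay > 0.5 else "-> deny")
```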

When an algorithmic system systematically discriminates among individuals based on their membership in social categories, this is called algorithmic discrimination. Algorithmic discrimination can arise in today's AI systems through different channels. For one thing, through automated, data-driven model development, ML algorithms can learn and reproduce the social inequalities, marginalizations, and discrimination embedded in historical data. In addition, the learned representations can be flawed from the outset and introduce new inequalities if the data used to train and test algorithmic systems are not sufficiently representative of the target population. For example, non-representative training data containing a disproportionate number of women who were unable to repay a consumer loan may cause a trained ML model to systematically underpredict the repayment probability of women. In line with their lower predicted repayment probability, women would then be less likely to receive a loan in an automated system of credit allocation. Due to its high scalability, such an AI system could increase social inequality and significantly reduce the economic welfare of women.
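
The following sketch illustrates this sampling-bias mechanism with purely synthetic data and a hypothetical gender feature (it is not the authors' simulation). In the assumed population, true repayment does not depend on gender at all; only the training sample over-represents women who defaulted, yet the fitted model ends up underpredicting women's repayment probability.

```python
# Illustrative sketch (synthetic data, hypothetical "gender" feature) of the
# sampling-bias mechanism described above. In the assumed population, true
# repayment does not depend on gender; the training sample, however,
# over-represents women who defaulted.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def draw_population(n):
    gender = rng.integers(0, 2, size=n)                  # 1 = woman, 0 = man
    score = rng.normal(size=n)                           # creditworthiness proxy
    repay = (score + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return np.column_stack([gender, score]), repay

n = 20_000
X, y = draw_population(n)

# Non-representative sample: half of the women who repaid are dropped,
# so defaulting women are over-represented in the training data.
drop = (X[:, 0] == 1) & (y == 1) & (rng.random(n) < 0.5)
model = LogisticRegression(max_iter=1_000).fit(X[~drop], y[~drop])

# Evaluated on a fresh, representative sample, the model systematically
# underpredicts women's repayment probability relative to men's,
# although true repayment rates are (by construction) the same.
X_test, y_test = draw_population(n)
p = model.predict_proba(X_test)[:, 1]
women, men = X_test[:, 0] == 1, X_test[:, 0] == 0
print("mean predicted repayment prob.: women", p[women].mean().round(2),
      "| men", p[men].mean().round(2))
print("true repayment rate:            women", y_test[women].mean().round(2),
      "| men", y_test[men].mean().round(2))
```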

In addition to these direct effects, there is a risk that the continued learning of AI systems will steadily exacerbate distortions and discrimination over time. Such an algorithmic feedback effect can occur whenever the predictions generated by an AI system endogenously influence the structure or nature of the training data that become available in the future. In the consumer lending example, the fact that fewer women receive a loan means that the existing training data are enriched with disproportionately few new training examples for women, since actual repayment can only be observed if a loan is granted in the first place. This further distorts the already distorted training data set and makes a correction increasingly difficult, even if the group's actual repayment behavior changes over time. If the ML model is then retrained on the additionally distorted data, the quality of its predictions for women can deteriorate further, which in turn may reduce lending to women and distort the data set even more. If, as in this example, whether a new training example and its label can be collected at all depends on the model's prediction (the selective labels problem), an additional difficulty arises: the true performance of the ML model is hard to measure, so an overly optimistic impression of the effectiveness of the machine's deployment can quickly take hold.
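
A compact sketch of this selective-labels update, again with synthetic data, a hypothetical gender feature, and an assumed 0.5 approval cutoff (not the paper's setup): repayment is observed only for approved applicants, so a model trained on biased seed data adds fewer new labeled examples for women, while its errors on rejected applicants never become visible in the data.

```python
# Sketch of the selective-labels update described above (synthetic data,
# hypothetical features, assumed 0.5 approval cutoff). Repayment is observed
# only for approved applicants, so the biased model adds fewer new labeled
# examples for women, and its mistakes on rejected applicants stay invisible.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def draw_applicants(n):
    gender = rng.integers(0, 2, size=n)                  # 1 = woman, 0 = man
    score = rng.normal(size=n)                           # creditworthiness proxy
    repay = (score + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return np.column_stack([gender, score]), repay

# Biased seed data: women who repaid are under-represented.
X_seed, y_seed = draw_applicants(5_000)
keep = ~((X_seed[:, 0] == 1) & (y_seed == 1) & (rng.random(5_000) < 0.5))
model = LogisticRegression(max_iter=1_000).fit(X_seed[keep], y_seed[keep])

# New applicant pool: loans are granted when the predicted repayment
# probability exceeds the cutoff.
X_new, y_new = draw_applicants(5_000)
approved = model.predict_proba(X_new)[:, 1] > 0.5
women, men = X_new[:, 0] == 1, X_new[:, 0] == 0

# Only approved applicants yield an observable label that can be appended
# to the training data for the next model update.
print("new labeled examples: women", int((approved & women).sum()),
      "| men", int((approved & men).sum()))

# What the lender never observes: the repayment rate among rejected women,
# which would be needed to detect that creditworthy women are turned away.
print("repayment rate among rejected women (unobservable):",
      y_new[~approved & women].mean().round(2))
```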

This carries the risk that faulty and discriminatory systems are not recognized as such and are widely deployed, establishing AI systems as the gatekeepers of economic prosperity. To counteract such a dystopian scenario effectively and to realize the technology's huge social potential, comprehensive quality assurance protocols must be developed so that new AI systems can be thoroughly tested for their downstream consequences before they are introduced to the market. Policymakers have a decisive role to play here. Together with the developers of AI technologies, they must draw up comprehensive ethical and technical guidelines to ensure that AI systems generate progress and social welfare from which society as a whole, and not just specific groups, can benefit.

Keywords: Artificial Intelligence, Algorithmic Discrimination, Algorithmic Feedback Loops
