Hilary J. Allen

Driverless Finance


Advances in computing power and data processing techniques are facilitating innovations with enormous commercial potential – unsurprisingly, the financial industry is on the cutting edge of applying these technologies. These technological advances have significantly increased the ubiquity, sophistication and autonomy of financial algorithms in recent years – I refer to this increasingly automated financial decision-making as “driverless finance”. The phenomenon of driverless finance encompasses machine learning algorithms that make risk assessments and smart contracts that represent financial assets, and it also extends to autonomous technologies that have yet to be invented.

So far, the reaction to driverless finance has been mostly positive – business models like marketplace lending and robo-advisory have been praised for reducing the costs of, and thus expanding access to, financial services. But driverless finance has drawbacks as well. While consumer protection advocates have already raised concerns about potential privacy violations and algorithmic discrimination, there has been almost no discussion of the negative externalities that driverless finance could generate for third parties. To begin to address this gap, my forthcoming article in the Harvard Business Law Review explores how driverless finance – particularly the delegation of financial decision-making to machine learning algorithms – could generate a financial crisis, producing harmful economic conditions for society as a whole.

The financial industry sees great potential in delegating risk management to machine learning algorithms, because artificial intelligence can process vast amounts of information both systematically and quickly. In many respects, algorithmic risk assessments seem superior to anything a human could generate, and financial industry employees are likely to display automation bias, deferring to these assessments without interrogating their underlying processes. However, there are several reasons to be skeptical about delegating risk management entirely to artificial intelligence. First, a machine learning algorithm is only as good as the data set it learns from, and we should be concerned that algorithms are learning from recent data sets that don’t contemplate low-probability but high-consequence tail events – the type of events that precipitate financial crises. Second, the process by which such algorithms select and weight data from the data sets provided to them is opaque to outside observers, so mistaken conclusions cannot be detected in advance (and if it does become clear that a machine learning algorithm has made a mistake, the technology does not yet exist to teach it not to repeat that mistake). Third, the automation of financial decision-making can also undermine basic assumptions about the use of diversification to manage risk: when financial decision-making is automated and performed by a few algorithms learning from the same data set, preferences may become monolithic and market behavior may become even more correlated than it is at present (a phenomenon I refer to as “correlation by algorithm”), exacerbating tendencies towards bubbles and panics.
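
To make the first and third of these concerns concrete, here is a minimal Python sketch of my own (purely illustrative, and not drawn from the article): a naive Value-at-Risk model learns from five years of calm-market returns containing no tail events, a crisis-scale loss then dwarfs its worst-case estimate, and because every firm in the simulation learned from the same data, all of their risk limits are breached at once. The return distributions, the 12% shock and the ten-firm setup are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Five years of hypothetical "recent" daily returns drawn from a calm
# regime: no crisis-scale tail events appear anywhere in the sample.
calm_returns = rng.normal(loc=0.0003, scale=0.01, size=5 * 252)

# A naive risk model estimates its 99% one-day Value-at-Risk
# (the loss it expects to exceed only 1% of the time) from what it has seen.
var_99 = np.quantile(calm_returns, 0.01)
print(f"99% VaR learned from calm data: {var_99:.2%}")

# A crisis-scale loss of the kind entirely absent from the training data.
tail_event = -0.12  # a hypothetical 12% one-day drop

# "Correlation by algorithm": if ten firms all train on the same calm data,
# they end up with essentially the same risk limit, so the shock breaches
# every firm's limit at once -- a recipe for simultaneous, correlated selling.
firm_limits = [np.quantile(calm_returns, 0.01) for _ in range(10)]
breached = sum(tail_event < limit for limit in firm_limits)
print(f"Firms whose risk limits are breached simultaneously: {breached} of 10")
```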

These weaknesses render society as a whole vulnerable to the delegation of too much financial decision-making to machine learning algorithms. However, no alarm has yet been sounded, and financial applications of machine learning continue to be developed without regulatory oversight. I argue that regulators need to oversee the development of driverless finance technologies, guiding that development in a way that minimizes harm to third parties. And regulators must act quickly: soon, the technology and the industry will be “baked”, and regulators will have missed their opportunity to reduce driverless finance’s potential to generate negative externalities. When it comes to machine learning algorithms used for risk management, perhaps the most important requirement regulators can impose is the inclusion of tail events in the data sets used to train these algorithms (which may require the development of hypothetical data sets). Without such data, machine learning algorithms will be particularly vulnerable to systemic events, rendering the financial system, and the economy more broadly, more fragile.
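
As a rough sense of what such a requirement might look like in practice, the following sketch (again my own illustration, with entirely assumed magnitudes and frequencies, not a proposed regulatory standard) splices hypothetical crisis-scale losses into an otherwise calm training set; the resulting risk estimate becomes markedly more conservative than one trained on the observed history alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed history: calm-market daily returns only, with no crisis in sample.
observed = rng.normal(loc=0.0003, scale=0.01, size=5 * 252)

# Hypothetical stress scenarios of the kind a regulator might require to be
# added to training data. The magnitudes and frequency here are assumptions
# made purely for illustration.
synthetic_tails = rng.normal(loc=-0.08, scale=0.03, size=25)

augmented = np.concatenate([observed, synthetic_tails])

# The same naive 99% Value-at-Risk estimate, computed with and without
# the synthetic tail events in the training set.
for label, data in (("observed data only", observed),
                    ("observed + synthetic tails", augmented)):
    print(f"99% VaR trained on {label}: {np.quantile(data, 0.01):.2%}")
```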

Hilary J. Allen, American University Washington College of Law

