From Disclosure to Advice through Big Data Analytics
Sean H. Williams – University of Texas School of Law
Recently, both in this blog and elsewhere, scholars have argued that AI and big data will one day enable what I call Big Data disclosures. These disclosures could be customized for various subgroups, or even personalized for each individual. Big Data disclosures have the potential to mitigate some of the key weaknesses of non-personalized mandatory disclosures.
In my forthcoming article, “AI Advice: The Irony of Big Data Disclosures and the New Advice Paradigm,” I argue that reliance on Big Data disclosures is misplaced. I begin by revealing a key irony of such reliance: the technological progress required to create effective Big Data disclosures will itself substantially reduce the need for such disclosures. In this future, advice, not disclosure, will be the dominant paradigm.
To create the near-perfect Big Data disclosures that some scholars envision, the AI would have to understand each individual’s goals, preferences, tolerance for risk, earning capacity, and so on. But once it has this level of information, disclosure itself is largely superfluous. Consider the following analogy: current mandatory disclosure regimes create a frustrating game of connect the dots. The government requires offerors to provide you with large amounts of data (the dots) and leaves it to you to decide, first, which dots are relevant to you, and second, how to connect those dots in order to create some picture of what the world would look like if you make a particular choice, call it Choice X.
Big Data disclosures reduce the number of dots. They do so by predicting which information is most relevant to you, and how to make that information more accessible to you. They can do so only if the AI has some sense of what the world would look like if you make Choice X. The interest rate on a loan may be particularly relevant to you, and the late payment penalties might not. The AI knows this because of its analysis of big data, which revealed your personality traits, income, spending habits, and more. This AI is already painting pictures that show the effects of Choice X (and Y and Z). But once the AI has painted these pictures for itself, it seems somewhat cruel to offer you only a connect-the-dots version. Why not just tell people which choice is likely to be better? This is what advice does.
AI advice has significant advantages over even highly customized Big Data disclosures. As recent critiques have made clear, literacy and numeracy problems are staggering. Even if we could overcome these, decision fatigue and our general reluctance to make difficult decisions make the disclosure paradigm fundamentally flawed. Advice overcomes these barriers. Advice does not require that you understand interest rates or complex decision trees. It can be as simple as “don’t get the extended service warranty,” “get the 30-year mortgage, not the 15,” or “choose the prepaid plan because it is a far better fit for your data usage patterns.” Such advice requires no numeracy and very little literacy to understand. It largely relieves you of the burden of decision fatigue, and makes any difficult tradeoffs for you.
AI advice also has powerful advantages over classic nudges like default rules. One enduring critique of setting default rules in order to nudge people in particular directions is that defaults are not transparent and border on manipulation. Advice, almost by definition, is transparent about both its ultimate goal (to influence behavior) and its method (rational persuasion). The advice paradigm therefore avoids many of the critiques leveled against nudges.
To date, the potential benefits of advice have been obscured by the current limits on producing good advice. Simply put, everyday advice can be bad. But as AI becomes more sophisticated, as big data becomes more all-encompassing, and as it becomes possible for the AI to run a dizzying number of mini-experiments in real time, those limits will recede. Accurate AI advice will become a real possibility, perhaps sooner than we think.