
Achieving Optimal Disclosure through Algorithmic Arrangement Informed by Behavioral Data

Fabiana Di Porto - The Hebrew University of Jerusalem


What is Law & Technology scholarship, really? How does this nascent interdisciplinary field of study differ from contiguous disciplines such as Digital, ICT or Cyber law? How does it relate to Empirical Legal Studies or the LegalTech industry?


In my recently published article, “From BADs to BEDs. Algorithmic Disclosure Regulation. Theoretical Aspects for Empirical Application”, I theorize the birth of a new field of study, a refocusing of Law & Technology that makes use of Machine Learning algorithms (broadly understood) to perform four tasks: (a) analysis, (b) interpretation, (c) application and enforcement, and (d) enhancement of the law/regulation.


To give a taste of how point (d), the ‘regulation enhancement’ strand of research, works, I offer the example of Algorithmic Disclosure Regulation. More specifically, I suggest using ML algorithms to save Disclosure Regulation, notoriously one of the least effective and most symbolic regulatory strategies, from failure.


Take the current COVID-19 pandemic as an example: what share of society attentively reads, and is fully aware of, the latest health regulations? Given the complexity and dynamic nature of COVID-19-related regulatory measures, it is unlikely that everyone takes the time to understand the latest rules. Since the effectiveness of such rules depends crucially on how well people are informed about their rights and duties, this crucial information needs to win the constant competition for our attention. What if it could be targeted to our informational needs, differentiated across homogeneous groups, updated automatically as its content changes, and delivered in a timely fashion?


Consider another example, from the field of antitrust enforcement. Behavioral commitments often require supervisory trustees to oversee the implementation of transparency duties. What if such transparency duties were automated and compliance monitored in real time by algorithms?
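
To make the idea concrete, here is a minimal, hypothetical sketch of such automated compliance monitoring: a checker that scans a published disclosure for the clauses a behavioral commitment requires. The clause list, function name and simple substring matching are my own invented assumptions, not part of the article; a real system would rely on far more robust legal-text analysis.

```python
# Hypothetical sketch of real-time compliance monitoring for a transparency
# commitment: scan the published disclosure for clauses the commitment
# requires. REQUIRED_CLAUSES and the matching logic are invented examples.
REQUIRED_CLAUSES = [
    "ranking criteria",   # how results/offers are ordered
    "paid placement",     # whether positioning was paid for
    "data sharing",       # what data is passed to third parties
]

def check_compliance(disclosure_text: str) -> dict:
    """Return which required clauses are missing from the disclosure."""
    text = disclosure_text.lower()
    missing = [c for c in REQUIRED_CLAUSES if c not in text]
    return {"compliant": not missing, "missing": missing}

# e.g. run automatically on every update of the firm's published terms
print(check_compliance("We disclose our ranking criteria and paid placement."))
# -> {'compliant': False, 'missing': ['data sharing']}
```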


My article sets out how all this could become reality by providing the theoretical underpinnings of a comprehensive, interdisciplinary approach that tackles the failures of disclosures at all levels, from rulemakers to industry and consumers, by harnessing the enormous potential of data-driven, algorithmic rulemaking. I propose bringing together representatives of all stakeholders in a regulatory sandbox, where the ‘best available disclosures’ (BADs) are tested with consumers and subsequently refined into tailored ‘best ever disclosures’ (BEDs), making use of the behavioral data generated during the testing process.


This theoretical concept is developed in three steps. The article first identifies different failures of disclosures and discusses how the existing literature has responded to them. It shows that most existing approaches to the deficiencies of disclosure regulation are either highly paternalistic or geared only at very specific failures (e.g. the complexity or incompleteness of disclosures). Based on these findings, the article not only identifies the most relevant failures in both de iure disclosure regulations and the de facto industry disclosures based thereon, it also suggests indicators to quantify the magnitude of these shortcomings (see Table 1).
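
As a purely illustrative sketch of what such indicators might look like in practice, the following snippet computes two simple complexity proxies for a disclosure text and aggregates them into a single grade. The indicator names, weights and thresholds are assumptions of mine for illustration; they are not the indices of Table 1.

```python
# Illustrative failure indicators for a disclosure text: sentence length
# and long-word ratio as crude proxies for complexity. All weights and
# cut-offs below are invented for demonstration purposes.
import re

def complexity_indicators(text: str) -> dict:
    """Compute simple proxies for disclosure complexity."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    long_words = [w for w in words if len(w) >= 7]
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "long_word_ratio": len(long_words) / max(len(words), 1),
        "total_words": len(words),
    }

def failure_score(ind: dict) -> float:
    """Aggregate indicators into one grade in [0, 1]; higher = worse."""
    return min(1.0, 0.5 * ind["long_word_ratio"]
                    + 0.5 * min(ind["avg_sentence_length"] / 40, 1.0))

sample = ("The licensee shall, notwithstanding any provision heretofore, "
          "indemnify the provider against all claims arising hereunder.")
ind = complexity_indicators(sample)
print(ind, failure_score(ind))
```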


This then allows the different disclosures used by the industry to be ranked, linking them to the regulatory goals of the de iure disclosures through a knowledge graph and identifying the best-functioning existing disclosures. These ‘best available disclosures’ (BADs) can then be tested with real-life consumers and industry representatives in a ‘regulatory sandbox’, generating behavioral data. This step is as innovative as it is crucial: even if a disclosure is comprehensible, coherent and complete from a theoretical point of view, that does not automatically mean it will be well received by all consumers or suitable for all industry needs.
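
Under generous simplifying assumptions, the ranking step could look something like the sketch below: each industry disclosure carries coverage scores for the regulatory goals it is linked to, and a weighted sum orders the candidates. A plain dictionary stands in for the knowledge graph, and every name, weight and score is invented for illustration.

```python
# Hypothetical ranking of industry disclosures against regulatory goals.
# In a real implementation the links would come from a knowledge graph
# connecting de facto disclosures to de iure goals; a dict stands in here.
GOAL_WEIGHTS = {"informed_consent": 0.5, "price_transparency": 0.3,
                "data_use_clarity": 0.2}

# Per-disclosure coverage of each goal (0..1), e.g. derived by inverting
# the failure indicators sketched above (higher = better).
disclosures = {
    "provider_A_terms": {"informed_consent": 0.8, "price_transparency": 0.4,
                         "data_use_clarity": 0.6},
    "provider_B_terms": {"informed_consent": 0.5, "price_transparency": 0.9,
                         "data_use_clarity": 0.7},
}

def rank_bads(candidates: dict, weights: dict) -> list:
    """Rank disclosures by weighted goal coverage; best candidates first."""
    def score(cov):
        return sum(weights[g] * cov.get(g, 0.0) for g in weights)
    return sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True)

for name, cov in rank_bads(disclosures, GOAL_WEIGHTS):
    print(name, round(sum(GOAL_WEIGHTS[g] * cov[g] for g in GOAL_WEIGHTS), 2))
```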


In fact, different groups of consumers may have different capabilities and preferences regarding how and when to receive information, as well as what information to receive. A regulatory sandbox opens the unique opportunity to identify such groups and tailor the way disclosures are presented to them, which should make the disclosures significantly more effective, as similar approaches in online advertising suggest. With the help of machine-learning algorithms, the behavioral data will be used to transform the tested BADs into ‘best ever disclosures’ (BEDs) to be implemented automatically by the industry, thus significantly decreasing the cost of complying with disclosure duties.
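
One plausible machine-learning step here is clustering sandbox participants into homogeneous groups based on their behavioral traces. The following minimal sketch uses k-means from scikit-learn; the features, values and group interpretations are invented for illustration and are not drawn from the article.

```python
# Minimal sketch: cluster sandbox participants into homogeneous groups
# from hypothetical behavioral features, as a basis for group-tailored BEDs.
import numpy as np
from sklearn.cluster import KMeans

# rows: consumers; cols: [reading_time_s, quiz_score_0_1, clicks_on_help]
X = np.array([[120, 0.90, 1], [15, 0.30, 0], [90, 0.80, 2],
              [10, 0.20, 0], [110, 0.85, 1], [20, 0.35, 1]], dtype=float)

# normalise features so no single scale dominates the distance metric
Xn = (X - X.mean(axis=0)) / X.std(axis=0)

groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xn)
print(groups)  # e.g. two groups: attentive readers vs. skimmers

# Each group would then receive a differently tailored BED, e.g. a layered
# summary for skimmers and a full-text version for attentive readers.
```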


It must be noted that such ‘algorithmic rulemaking’ proposals are frequently met with well-justified skepticism regarding their compatibility with transparency requirements and other democratic principles. However, since the sandbox stage is a transparent process that offers the possibility of intervening in and correcting potentially problematic algorithmic norms, the proposal illustrated above duly takes such concerns into account.


As a whole, the article presents a comprehensive, balanced and innovative approach aimed at ensuring that we do not drown in the sea of available information, by transforming the source of the problem, the omnipresence of data, into its solution: data-driven, Algorithmic Disclosure Regulation.


Table 1: Indexes of Failure of de iure and de facto disclosures – Grading (per domain/sector)

