CFRED CUHK Law

Corporate Humanity through Artificial Intelligence

Michael R. Siebecker - University of Denver, Sturm College of Law


-- Can existing corporate fiduciary principles adequately guide officers and directors regarding the proper development and utilization of artificial intelligence (“AI”) technologies? What role should AI play in corporate boardrooms? These questions seem especially pressing considering the increasing prevalence of AI throughout a variety of industries in a host of key functions.


It should come as little surprise, however, that with the advent of such a powerful new technology, important concerns arise regarding the limits on its use and the ends to which it should be directed. Ethicists warn about AI’s lack of moral sensitivity, empathy, and appreciation for human rights. Most certainly, many ethical questions exist, but if the proliferation of AI remains inevitable, the task of identifying the proper parameters within which to use AI remains of utmost importance.


Despite worries that AI might enhance the likelihood of unethical corporate practices, the increased utilization of AI technologies by corporate managers might promote just the opposite result—AI could make corporate decision-making more humane. But what would that mean? Currently, corporate managers face persistent and increasingly intense criticism for pursuing corporate policies that promote managerial interests seemingly at odds with the basic fiduciary duties of loyalty and care that corporate managers owe to the corporation and its shareholders. Whether casting a blind eye to corporate criminality, using the corporate treasury to pursue personal political goals, ignoring the interests of corporate stakeholders, promoting managerial interests that run counter to shareholder values, or hiding behind the First Amendment to avoid transparency and accountability, corporate managers too often allow prevailing decision-making paradigms to blur their fiduciary focus. Recurring waves of corporate scandals seem to be the disappointing result.

AI-assisted corporate decision-making, however, could revitalize the fiduciary bond between managers and the corporations they serve by freeing corporate managers to focus more proactively on what sustaining a robust duty of trust requires, and to spend less time mired in reactionary crisis management. In that sense, AI could make corporate decision-making more attentive to the interests of corporate shareholders, stakeholders, and the community the corporation inhabits. Though perhaps counterintuitive, an enhanced reliance on AI for many mundane aspects of compliance and governance would foster a more mindful—and arguably more humane—attentiveness by officers and directors to the core goals and values the corporation hopes to promote.


In my recent article, Making Corporations More Humane Through Artificial Intelligence, I explore whether reinvigorating corporate fiduciary duties around enhanced corporate discourse remains essential to guide corporate managers regarding the proper development and utilization of AI. Although this might seem an abstruse philosophical exercise applied to a novel technology, in a series of articles over the past decade—Trust & Transparency: Promoting Efficient Corporate Disclosure Through Fiduciary Based Discourse, A New Discourse Theory of the Firm After Citizens United, and Bridging Troubled Waters: Linking Corporate Efficiency and Political Legitimacy Through a Discourse Theory of the Firm—I have argued that meaningful corporate governance depends inescapably on transparent, ongoing discourse between corporations and the constituencies they serve. Rather than proposing some legislative fix to stem persistent corporate malfeasance or insensitivity to shareholder preferences, my research investigates how a more robust understanding of the philosophical concept of trust could redirect existing corporate governance principles and managerial practices.


In Making Corporations More Humane Through Artificial Intelligence, I suggest that a revitalized fiduciary framework based on the philosophy of “encapsulated trust” would allow corporate decision makers to shepherd effectively the development, utilization, and dissemination of AI. Construing corporate fiduciary duties around encapsulated trust would direct AI utilization to enhance the integrity of corporate discourse, diminish corporate corruption, validate a consideration of morality in business decisions, and require corporate directors to embrace a more pluralistic and inclusive approach to corporate decision making. The Article concludes that although AI might not supplant human beings on corporate boards, AI technologies could very well help make decisions by corporate managers more humane.

