Machine lawyering is the sum of algorithms and data-driven DX technologies
Wanbil Lee – iEthics, Hong Kong
-- Many of us get excited about DX technologies, especially AI, yet are uncomfortable proceeding with them. On the one hand, we believe in the myth of powerful algorithms that are smart and unbiased and can make better decisions than humans; on the other, we hesitate because of the “legal, political, cultural and ethical challenges” each of these technologies poses (Grooman, 2019). The hesitation is not uncommon and the phobia is understandable. Knowing the techno-ethical threats and understanding the technologies helps; becoming acquainted with a method of analysis for untangling the issues those threats raise would boost confidence further.
Briefly, Cloud Computing is a paradigm for delivering hosted services over the Internet. Artificial Intelligence is about building smart machines capable of performing tasks that typically require human intelligence (such as "learning" and "problem solving"). Machine Learning is a subset of artificial intelligence that aims to build algorithms and statistical models able to perform a specific task without explicit instructions, relying instead on patterns and inference. The Internet of Things is a system of interrelated ‘things’, each provided with a unique identifier and the ability to transfer data over a network without human-to-human or human-to-computer interaction; the things can be computing devices, mechanical and digital machines, objects, animals or people. Finally, Big Data is not only large data stores (diverse, complex, and growing at ever-increasing rates beyond the capability of traditional data-processing software), but also processes (a specific set of techniques and methodologies for uncovering insights from datasets and converting these informational assets into value).
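The distinction between explicit instructions and learning from patterns can be made concrete with a minimal sketch. The data and function name below are illustrative, not from the article: the program is never told the rule relating the inputs and outputs; it estimates the rule (here, a line) from examples by ordinary least squares.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training examples: the code never contains "y = 2x + 1";
# that relationship is inferred from the data.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]
slope, intercept = fit_line(xs, ys)
print(round(slope, 2), round(intercept, 2))  # learned: 2.0 1.0
```

The same point scales up: a biased or unrepresentative training set would shift the fitted parameters just as surely, which is why the quality of the data fed to an algorithm matters so much.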
Threats common to all DX technologies are the privacy and security risks attributable to any IT environment, notably data breaches, malicious insiders, and denial-of-service or distributed denial-of-service attacks; there are numerous others. Cloud-specific threats stem from the multi-tenant environment; three examples are reduced control and visibility (the increased complexity strains IT staff and invites downtime), incomplete data deletion, and vendor lock-in. AI/ML-related threats originate in the design of the algorithms and the training of the applications. Since any algorithm is only as smart as the data its trainer feeds it, and is coloured by the designer’s past experience and sense of morality, the most disturbing threats are bias built into the algorithm, the history on which AI relies, and the fairness of the outcomes AI delivers. Big Data-specific threats relate to security as well as privacy: for example, the selling of data, surveillance, disclosure, discrimination, lack of transparency, unrepresentative sample selection, non-human and bad-faith actors, the reproduction of social biases, and conclusions derived from erroneous data patterns. IoT-related threats arise from closely connecting a huge number of devices and concentrating personal data, which not only invades public space but also perpetuates normative behaviour, in effect allowing devices to “spy on people in their own homes”.
The techno-ethical issues that arise are usually complex, and stakeholders’ interests often contradict one another. The Ethical Matrix (a decision-support framework and a checklist of concerns) can be used to promote structured discussion and untangle the issues. If necessary, the Hexa-dimension Metric may be applied to measure a decision, balancing the possibly conflicting needs of the stakeholders.
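An Ethical Matrix is commonly laid out as a grid of stakeholders against ethical principles, with each cell recording the concerns discussed for that pair. A minimal sketch follows; the stakeholder names, principles, and concern entries are illustrative placeholders, not prescriptions from the article.

```python
# Illustrative principles; a real matrix would use the principles the
# facilitating team agrees on.
PRINCIPLES = ("wellbeing", "autonomy", "fairness")

def build_matrix(stakeholders):
    """Create an empty matrix: one row per stakeholder, one cell per principle."""
    return {s: {p: [] for p in PRINCIPLES} for s in stakeholders}

def add_concern(matrix, stakeholder, principle, concern):
    """Record a concern raised in discussion for one stakeholder/principle cell."""
    matrix[stakeholder][principle].append(concern)

def unresolved(matrix):
    """List the cells that still lack any recorded discussion."""
    return [(s, p) for s, row in matrix.items()
            for p, concerns in row.items() if not concerns]

m = build_matrix(["data subjects", "service provider", "regulator"])
add_concern(m, "data subjects", "autonomy", "opt-out of profiling")
add_concern(m, "service provider", "fairness", "bias audits of the model")
print(len(unresolved(m)))  # 7 of the 9 cells still need structured discussion
```

The value of the grid is exactly this checklist property: empty cells make visible which stakeholder perspectives have not yet been examined, prompting the structured discussion the framework is meant to support.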