Regulating deepfake identity fraud
Margarita Vladimirova - Deakin University
-- New digital tools appear constantly, and their misuse can endanger our finances, safety, freedom, sanity, relationships, dignity, and even lives. One such dangerous tool is the deepfake. Since 2016, the creation and detection of deepfake videos have both advanced enormously. Without proper detection software, people cannot tell the difference between a real video and a fake one, which means they can fall victim to all manner of malicious and deceptive acts, whether it be a fake video of a business partner requesting a money transfer, a politician speaking on significant issues, or a family member asking for help.
Obvious dangers to the social order call for a legal answer. In my recent paper, “Deepfake Identity Fraud Crimes: New Modus Operandi or a New Offence?”, presented at Machine Lawyering’s 2021 conference, I look to US regulation as an example of the approach that other jurisdictions could take, and then try to predict the Australian response to the misuse of this remarkable digital tool. I also further develop the idea that deepfake crimes are closely related to identity theft crimes.
In the US, tort, copyright and right of publicity violations are tightly intertwined in the process of creating deepfakes (Geddes 2020; Chesney 2018). Whether the footage and photos used were obtained legally or not, and whether they were heavily changed or only slightly modified, IP violations are also present (Jütte 2016). Mention should also be made of criminal harassment and threat law (cyberstalking) and of criminal revenge porn and nonconsensual pornography law (Perot and Mostert 2020). Currently there are two main federal bills in the US House of Representatives and the Senate relating to the regulation of deepfakes: the Deepfake Report Act (DRA) of 2019 and the Identifying Outputs of Generative Adversarial Networks (IOGAN) Act. The DRA would require the Department of Homeland Security to submit a report to Congress on the technologies used to create and detect deepfakes, and would require anyone producing a deepfake edit to label it with a watermark disclosing its fraudulence (Ice 2019). The IOGAN Act would require the directors of both the National Science Foundation and the National Institute of Standards and Technology to report to Congress on potential research opportunities with the private sector for detecting deepfake videos.
In Australia, the Commonwealth Criminal Code Act 1995 provides a relevant offence under s 474.17: using a carriage service to menace, harass or cause offence. This offence is of broad application and covers a range of potentially offensive conduct committed over a telecommunications network, including the Internet (Kirchengast and Crofts 2019). As in the US, privacy, IP and copyright violations have been considered, and in 2016 the ALRC recommended a new statutory civil action for serious invasion of privacy. Another popular approach is to treat deepfakes as defamation.
However, there is little analysis of deepfakes as an identity theft tool, and my paper therefore examines how the use of deepfake technologies fits within the identity crime offences of the Australian Criminal Code Act 1995. Such crimes would be committed by using a carriage service to menace, harass or cause offence, making deepfake videos yet another tool for identity crime schemes. At the same time, taking into account the various aspects of their creation and deployment, deepfake videos can also violate IP and privacy law, as well as fall under tort law. Given this complex nature, this combined instrument of equipment and carriage service should encourage legislators to address it exclusively in a separate regulation, subjecting it to the direct and strict control it deserves.