Artificial Intelligence vs. Data Protection
On November 29, 2017, my night flight to Hong Kong (where I was supposed to speak at CUHK about the new European General Data Protection Regulation, “GDPR”) was delayed, so, too tired for other activities, I found myself thumbing through China Daily. Much to my surprise, and wide awake in a second, I came across an article reporting that Facebook had announced just a couple of days earlier that it is using artificial intelligence (AI) technology, including pattern recognition, to detect whether someone is expressing thoughts of suicide in a post or live video. “This will eventually be available worldwide, except the European Union,” Facebook said.
I was not surprised that such algorithms had now gone live: Mark Zuckerberg had already suggested in February and again in May – having experienced a cluster of live-streamed suicides on Facebook in April – that the company would use AI to identify problem posts across its network and provide help. I was surprised that a prediction of my talk at CUHK had come true even before I made it public. I had assumed that the GDPR might make big data players, such as Google or Facebook, want to (partially) withdraw from the European market due to massive sanctions for non-compliance with GDPR rules (20 million Euros or, whichever is higher, 4% of worldwide annual revenue), and, moreover, that these rules apply not only to EU companies but to corporations worldwide as soon as they process personal data of “data subjects who are in the Union” (Art. 2, which does not even restrict the GDPR to EU citizens but extends it to tourists, refugees, or others who “are” in the EU).
Facebook's new AI contradicts GDPR rules in many ways. First, it is processing sensitive data (concerning health), which, under the GDPR, is prohibited unless data subjects give their explicit consent (Art. 9). Second, the GDPR stipulates that the data subject has, in principle, the “right to erasure” (Art. 17) and the “right to object” to data processing at any time (Art. 21), whereas Facebook confirmed that users cannot opt out of the scans performed by the new AI-supported software. It therefore comes as no surprise that Facebook is not willing to offer its new service to users who “are in the EU”.
Whether that's good or bad remains to be seen. What we know, however, is that the GDPR prohibits processing AI-generated personal data the way Facebook is doing it, and that concerns have been raised because Facebook did not say whether it would use AI similar to the suicide prevention tool in other situations. Alex Stamos, Facebook's chief security officer, responded to such concerns with a tweet indicating that Facebook does take responsible use of AI seriously: “the creepy/scary/malicious use of AI will be a risk forever, which is why it's important to set good norms today around weighing data use versus utility and be thoughtful about bias creeping in”. But this does not explain why users cannot opt out, and Zuckerberg himself pointed out that “AI will be able to identify different issues beyond suicide as well”. Moreover, Stamos' tweet also fails to explain how the algorithm actually works.
An understanding of such algorithms is crucial to fully understanding and judging data privacy questions, because when algorithms are self-learning (and this is the hallmark of AI) the processor is no longer able to determine which data will be generated; that is up to the algorithm. This raises a number of new and complex legal questions. What Facebook has let us know so far (looking for telltale signs, such as phrases like “kill myself” or comments asking users “are you OK or can I help?”) is not learning AI, strictly speaking, but can be performed by conventional search-engine technology. Self-learning would mean, for example, that the algorithm is able to detect that, because certain phrases are often used in combination with obvious indicators like “kill myself”, accounts should be flagged even when only these less explicit phrases (e.g. “no more friends”, “desperate”) are used, especially when they appear with increasing frequency. If that is what Zuckerberg meant when he said that “AI will be able to understand more of the subtle nuances of language,” then the tool is indeed converging toward AI. Be that as it may, Facebook is not using the new technology in Europe, and this shows how, in our current market, protecting data privacy can end up depriving individuals of genuinely useful and helpful AI services.
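The distinction drawn above can be illustrated with a minimal sketch. The phrase lists, scoring scheme, and threshold below are hypothetical illustrations (not Facebook's actual rules): the first function flags only explicit phrases, the way a conventional search would; the second stands in for a learned model by letting subtler phrases, which historically co-occur with explicit ones, accumulate into a flag even when no explicit phrase ever appears.

```python
# Hypothetical phrase lists for illustration only.
EXPLICIT_PHRASES = {"kill myself", "want to die"}
SUBTLE_PHRASES = {"no more friends", "desperate"}


def keyword_flag(post: str) -> bool:
    """Conventional search: flag only if an explicit phrase appears."""
    text = post.lower()
    return any(p in text for p in EXPLICIT_PHRASES)


def learned_flag(posts: list[str], threshold: int = 2) -> bool:
    """Heuristic stand-in for a self-learning model: subtle phrases
    count toward a score, so an account can be flagged without any
    explicit phrase, especially as such phrases accumulate."""
    score = 0
    for post in posts:
        text = post.lower()
        if any(p in text for p in EXPLICIT_PHRASES):
            score += 2  # explicit indicators weigh more
        if any(p in text for p in SUBTLE_PHRASES):
            score += 1  # subtle indicators add up over time
    return score >= threshold


posts = ["I feel desperate lately", "no more friends left"]
print(keyword_flag(" ".join(posts)))  # False: no explicit phrase present
print(learned_flag(posts))            # True: subtle phrases accumulate
```

The legal point follows directly: because the flag in the second function emerges from accumulated associations rather than a fixed rule, the processor cannot predict in advance which data (here, a health-related inference) the system will generate about a given user.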
Wolfgang Zankl, Vienna