AI Guidelines

By hook or by crook, Europe needs to differentiate its approach to artificial intelligence from mighty competitors such as the U.S. and China. To be fair, "competitors" might not be the right word given that, for the time being at least, there's no real competition.
According to some of the latest data available, AI investment in Europe totaled $3 to $4 billion in 2016, compared with $8 to $12 billion in Asia and $15 to $23 billion in North America.
Hoping to close the financial gap in the next decade, the EU Commission is for now marketing itself as the promoter of a different kind of AI: a "trustworthy AI," defined as a technology that respects fundamental rights and ethical rules. A major breakthrough, or just a PR stunt? Probably neither; rather, the arduous search for a balanced approach.
There's much to be said in favor of AI's future impact on society, and against it. AI, for instance, could make cancer treatment less toxic, help machines take over dangerous and dirty jobs, and help researchers sift through online ads to identify possible victims of human trafficking. On the other hand, coupled with facial recognition, it could also boost the surveillance of citizens worldwide, endanger human rights, and power killer robots that select and destroy targets without human intervention.
To steer the change in the right direction, maximizing the benefits while minimizing the risks, the High-Level Expert Group on AI appointed in April by the European Commission, which consists of 52 independent experts representing academia, industry, and civil society, released last Tuesday the first draft of its Ethics Guidelines for the development and use of artificial intelligence (AI).
The experts are now asking for feedback from citizens, in order to better bring into focus some sensitive points and define some rules to limit potential harm from AI systems.
Some of the critical concerns highlighted in the 37-page report are connected to the threats described above. Number one on the list, "identification without consent," is closely linked to facial recognition: new AI-powered software could bring mass tracking and mass surveillance of citizens to a completely new level.
So far this has been limited to highly centralized and authoritarian states like China, but there are signs that something similar could soon happen in the West: just a few days ago, UK police announced that they were testing facial recognition on Christmas shoppers in London.
The key to avoiding abuses should be informed consent from citizens, but that is easier said than done. As the London trial has shown, even if the filming is clearly advertised and passers-by are given the chance to opt out, not giving consent or trying to avoid the cameras might make you a suspect by default.
Another concern that sparked lively discussions among the experts relates to "covert AI systems": software and robots that successfully pretend to be human. Think of Google Assistant calling the hairdresser for an appointment, or of the humanoid robots made by Hiroshi Ishiguro. "Their inclusion in human society might change our perception of humans and humanity," the experts write, and could have "multiple consequences such as attachment, influence, or reduction of the value of being human."
The same goes for "citizen scoring," which is not necessarily a bad thing in itself: we are all used to being given a score at school, or when applying for a driver's license. But when every aspect of your life is under scrutiny by a pervasive and not fully transparent algorithm (China's social credit system offers a good example of that), things start to get really troubling.