The EU has begun to debate what is being called the world's first AI law. It could be approved in the near future: its objective is to ban uses of AI that pose what are deemed "unacceptable risks," such as indiscriminate facial recognition or the manipulation of people's behavior. AI could be heavily regulated in critical sectors such as health and education, while sanctions and sales bans could be imposed on systems and firms that don't comply with the legislation. UNESCO, for its part, has developed a voluntary ethical framework, but that voluntary nature is precisely its main weakness: China and Russia, two countries that use this technology for mass surveillance of their populations, have signed on to its principles.

“There are fundamental rights involved and it’s an issue that we have to tackle and worry about, certainly… but with balance,” Danesi cautions. Juhan Lepassaar – executive director of the EU Agency for Cybersecurity (ENISA) – is of the same opinion: “If we want to secure AI systems and also guarantee privacy, we must analyze how these systems work. ENISA is studying the technical complexity of AI to better mitigate cybersecurity risks. We also need to find the right balance between safety and system performance.”