Published 23 February 2023 by Hanna Kurlanda-Witek
The EU Artificial Intelligence Act: Balancing Innovation With Risks
Artificial intelligence is set to transform many aspects of our lives: business processes, supply chains, employment, and healthcare. There are vast opportunities in AI systems, which are defined by the European Commission as “software that is developed with one or more of the techniques that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”
There are multiple uses for AI in medicine in the pipeline, particularly in consolidating large datasets, such as medical images used for diagnosis, real-time data from smart devices used by patients, or data collected during clinical trials. The focus of the next two decades will be on generating and using these different types of data to advance evidence-based medicine.
The European Commission proposed the AI Act in April 2021 with the aim of regulating artificial intelligence in the European Union. In December 2022, the Council of the EU adopted its common position on the AI Act, and the European Parliament is scheduled to vote on the draft AI Act by the end of March. Afterwards, the European Commission, Parliament and Council will enter into discussions about the Act, and the final version could be adopted by the end of 2023.
The regulation is based on identifying and classifying the risk posed by the AI system. There are four risk categories: unacceptable risk, such as social scoring or AI applications that may be manipulative (these AI systems are banned); high risk, such as medical devices, which are permitted but subject to strict legal requirements; limited risk, such as chatbots, which must meet transparency obligations (the user has to know AI is being used); and minimal or no risk, such as video games, which are unregulated in most cases.
Developing AI With User Safety In Mind
The AI Act aims to be a comprehensive policy, covering all types of AI, even those that haven’t been developed yet. There are concerns that once the AI Act is finalized, enforcing it across all member states may result in regulatory overreach, which could hamper the innovation the EU wants to attract. But the hope is that putting these guidelines in place will give companies and start-ups a roadmap in a time of unprecedented development of the technology, without compromising the fundamental rights of citizens.