Published 23 February 2023 by Hanna Kurlanda-Witek
The EU Artificial Intelligence Act: Balancing Innovation With Risks
Artificial intelligence is set to transform many aspects of our lives − business processes, supply chains, employment, as well as healthcare. There are vast opportunities in AI systems, defined by the European Commission as “software that is developed with one or more of the techniques that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”
AI Turning the Corner in 2023
Nowhere is this more evident than in the viral use of ChatGPT (or Chat Generative Pre-Trained Transformer), which had an estimated 100 million users in January alone, only two months after being launched by OpenAI, a private company backed by Microsoft. The language-model chatbot focuses on generating written content such as emails and blog posts, but it could also have an impact on science and medicine: it is already being tested by scientists for writing research papers, and could be used by doctors to summarise patient health records. AI in medicine offers innovations from prevention to aftercare, from health apps and wearables, through diagnostics, to clinical decision-making and hospital data management.
There are multiple uses for AI in medicine in the pipeline, particularly in consolidating large datasets, such as medical images used for diagnosis, real-time data from smart devices used by patients, or data collected during clinical trials. The focus of the next two decades will be on generating and using these different types of data to advance evidence-based medicine.
Why Regulation is Needed
The soon-to-be ubiquitous nature of AI calls for regulation, as uncontrolled data collection could put personal and sensitive data at risk and lead to bias and misinformation. Even at this early stage, chatbots such as ChatGPT or Bard, developed by Google, are generating incorrect information − for example, fabricating references in research articles.
The European Commission proposed the AI Act in April 2021 with the aim of regulating artificial intelligence in the European Union. In December 2022, the Council of the EU adopted its common position on the AI Act, and the European Parliament is scheduled to vote on the draft AI Act by the end of March. Afterwards, the European Commission, Parliament and Council will enter into three-way negotiations on the final text, which could be adopted by the end of 2023.
The policy may become a global standard, particularly since the Act is also set to affect non-EU countries that have commercial ties to the EU.
Levels of Risk
The regulation is based on identifying and classifying the risk posed by an AI system. There are four risk categories: unacceptable risk, such as social scoring or AI applications that may be manipulative (these AI systems are banned); high risk, such as medical devices, which are permitted but subject to strict legal requirements; limited risk, such as chatbots, which must meet transparency obligations (the user has to know AI is being used); and minimal or no risk, such as video games, which are unregulated in most cases.
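To make the tiered structure concrete, the short sketch below models the four categories in Python. It is purely illustrative: the tier names follow the Act's four categories as described above, but the example systems and the lookup table are assumptions made for this sketch, not anything defined in the regulation itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the draft EU AI Act."""
    UNACCEPTABLE = "banned outright"
    HIGH = "permitted, subject to strict legal requirements"
    LIMITED = "permitted, subject to transparency obligations"
    MINIMAL = "largely unregulated"

# Hypothetical examples mapped to tiers, for illustration only;
# the Act assigns categories through its own definitions and annexes,
# not through a simple lookup like this.
EXAMPLE_SYSTEMS = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "AI-based medical device": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "video game opponent AI": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name.lower()} risk ({tier.value})")
```

The higher the tier, the heavier the compliance burden, so correctly classifying a system would in practice be the first step a provider takes under the Act.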
Developing AI With User Safety In Mind
The AI Act aims to be a comprehensive policy, covering all types of AI, even those that have not yet been developed. There are concerns that enforcing the Act in all member states may result in regulatory overreach, which could hamper the innovation that the EU wants to attract. But the hope is that putting these guidelines in place will provide companies and start-ups with a roadmap at a time of unprecedented development of the technology, without compromising the fundamental rights of citizens.