Blog

EU Artificial Intelligence Regulation: how will it affect the tech sector?

On 21 April 2021, the European Commission presented its Proposal for a Regulation on a European Approach for Artificial Intelligence. This EU Artificial Intelligence Regulation, a key element of the 2020 Strategy on Europe’s Digital Future, aims to create the first-ever legal framework for artificial intelligence, fostering innovation and maximizing the societal benefits of AI while ensuring the safety of EU citizens by guaranteeing the trustworthiness of AI systems. This article briefly presents the measures included in the proposal and provides Dr2 Consultants’ analysis of their potential impact on the tech sector.

Ensuring the safety and trustworthiness of artificial intelligence systems

To ensure the safety and trustworthiness of artificial intelligence systems and applications put on the European market, the European Commission adopted a risk-based approach in its proposal. Depending on the risk posed by a system or its application, different conditions will apply.

Unacceptable risk

The proposal for an EU Artificial Intelligence Regulation lists a number of AI systems or applications that pose an unacceptable risk to the safety, livelihood and rights of people, and that are banned altogether from being commercialized in the EU. Those notably include AI systems or applications manipulating human behavior to circumvent users’ free will, but the wording used by the Commission seems vague and it is unclear how systems will be assessed to determine whether they fall into the banned category.

The ban also covers AI systems or applications that allow social scoring by governments (directly targeting systems already introduced in China), as well as real-time biometric identification systems used by law enforcement, unless their use is strictly necessary in certain specific cases (such as a kidnapping, a terrorist threat, or the search for a criminal suspect).


High-Risk

One step down on the risk scale, the proposal lists artificial intelligence systems considered “high-risk”, covering a variety of sensitive applications, from transport (such as self-driving vehicles), to essential private and public services (such as credit scoring), to education (e.g. exam scoring). The proposal also targets public applications of AI, especially in the fields of law enforcement, migration and border control management, and the administration of justice.

High-risk systems will be allowed to be commercialized and used in the EU only after passing conformity assessment procedures, which ensure that the systems or their applications respect EU standards. Although the proposal plans for national authorities to conduct compliance checks, the text still intends for many applications to be evaluated through self-assessment, meaning that AI providers will assess for themselves whether they meet the conformity criteria set by the EU. Those criteria notably include: having adequate risk assessment and mitigation systems in place, using high-quality datasets to avoid algorithmic bias, and ensuring appropriate human oversight. This flexibility is likely to be welcomed by the tech industry, but MEPs have already warned that they would support stricter compliance rules.


Limited and minimal risk

Systems that do not fall under the “unacceptable” or “high-risk” categories are considered to pose limited or minimal risk, a category that covers the majority of artificial intelligence systems currently in use in the EU. AI systems of limited risk will need to respect transparency obligations, meaning that users must be informed that they are interacting with an AI system. Systems posing minimal risk, such as spam filters or AI-enabled video games, can be commercialized and used freely.

The application of the EU Artificial Intelligence Regulation will be overseen by a newly-created body, the European Artificial Intelligence Board.


Fostering European excellence in artificial intelligence

Next to measures to secure AI systems, the proposal also includes a few measures to promote innovation in AI in the EU. The Artificial Intelligence Regulation notably includes an update of the 2018 Coordinated Plan on AI, which sets out a series of actions to be taken by EU Member States and provides funding instruments, financed by the Digital Europe, Horizon Europe and Cohesion programmes, to accelerate investments in AI.

The proposal also targets SMEs to facilitate their access to testing and experimentation facilities as well as digital innovation hubs.

Next steps for the EU Artificial Intelligence Regulation and potential impact on the tech sector

The European Parliament and the Council of the EU, representing the Member States, will now both study the proposal and adopt their respective positions before entering into negotiations. The negotiations are likely to be stormy, given the two institutions’ diverging positions. MEPs support even stricter rules, notably pushing for additional applications to fall under the banned category. Member States, on the other hand, can be expected to demand more leeway on security and law enforcement applications, especially considering that national security remains a prerogative of the Member States.

As the proposal is likely to be reworded and amended during the negotiation process, its impact on the tech sector is not easy to evaluate. However, if the proposal were adopted as is, the first impact for the tech sector would be that companies commercializing banned AI systems, or putting high-risk systems on the market without completing the conformity assessment procedure, could face financial penalties of up to 6% of the company’s total annual turnover. Given the vague wording used by the Commission, it is difficult to foresee how the regulation will be enforced in practice.

Industry also worries that the regulation will overburden a sector composed mainly of SMEs and start-ups without a sufficient support framework to compensate, given that the measures included in the proposal to support innovation remain limited.

Big tech companies have a strong interest in AI technologies and systems developed by small startups: over the past five years, they have acquired more than 60 AI startups building systems that can improve the tech giants’ own products.

It remains to be seen how this new Artificial Intelligence Regulation introduced by the EU will affect the AI landscape. Will the measures to foster innovation and support SMEs lead to a multiplication of actors, or will the additional regulatory requirements weigh too heavily on startups, leaving room only for bigger companies?

Moreover, there is a risk that, as happened with the General Data Protection Regulation (2016), the implementation and enforcement of the European Artificial Intelligence Regulation at national level will be fragmented, leaving the industry to deal with disparities between Member States and an uncertain legal framework that would hamper the EU Single Market and the economy.

Dr2 Consultants continuously monitors the developments in the discussion on artificial intelligence and supports its clients on these matters. Should you be interested in further information on the AI Regulation and how it could impact your business, you can reach out to Dr2 Consultants at info@dr2consultants.eu or find more information on our website.