Regulating AI in the European Union: Seven Key Takeaways

On 19 February 2020, the European Commission (EC) published its long-awaited White Paper on artificial intelligence (AI). There has been much speculation about its content since several draft versions leaked earlier this year. The White Paper contains policy options to enable a trustworthy and secure development of AI in the European Union (EU), taking into account the values and rights of EU citizens. This blog post briefly discusses the White Paper and highlights seven key takeaways.

An article by Brahim Benichou, Thomas Gils, Jan De Bruyne & Ellen Wauters (Leuven)

  1. Reliance on two building blocks: trust and excellence

The White Paper has two main building blocks: an ‘ecosystem of trust’ and an ‘ecosystem of excellence’. Creating an ecosystem of trust should give citizens the confidence to take up AI applications, and give companies and public organisations the legal certainty to innovate using AI. In this respect, the EC relies on the ethical guidelines issued by the High-Level Expert Group on Artificial Intelligence (AI HLEG) to develop a European regulatory framework for trustworthy AI. To build an ecosystem of excellence that can support the development and uptake of AI, several actions must be taken, such as mobilising resources, investing in research and innovation, and creating the right incentives to accelerate the adoption of AI by small and medium-sized enterprises (SMEs).

  2. Only high-risk AI applications will be specifically regulated (risk-based approach)

The EC stresses that any regulatory intervention should be targeted and proportionate. That is why it does not want to regulate all AI systems but only high-risk AI systems. Systems that are not considered high-risk should only be covered by more general legislation, for example on data protection, consumer protection and product safety/liability. These rules may, however, need some targeted modifications to effectively address the risks created by AI systems.

  3. Several qualification criteria to determine high-risk AI applications

An essential question will be which AI systems qualify as high-risk and are therefore subject to additional requirements. AI applications should be considered high-risk when two cumulative criteria are met. On the one hand, the application should be used in a sector where significant societal risks can be expected to occur (e.g. transport or health). The EC plans to exhaustively list these sectors. On the other hand, the AI application should be used in the sector at stake in such a manner that significant risks are likely to arise. Consequently, not all AI systems in high-risk sectors will be regulated, and AI systems that may be regulated in a certain high-risk sector may not be regulated in another sector. The EC also acknowledges that the use of AI applications may, in exceptional circumstances, be considered high-risk in itself, irrespective of the sector concerned. Examples are the use of AI for recruitment purposes (i.e. a risk of discrimination) and for remote biometric identification such as facial recognition.

Given these assessment criteria, a situation of legal uncertainty might arise, as organisations will probably argue that their AI application(s) do not qualify as high-risk.
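
To illustrate the two-step test, the following minimal sketch expresses the qualification logic in Python. It is purely hypothetical: the sector list and the exceptional use cases are illustrative placeholders, since the EC’s exhaustive sector list has yet to be published, and the assessment of ‘significant risks’ is reduced here to a simple flag.

    # Hypothetical sketch of the White Paper's two-step high-risk test.
    # The sector list and exceptional uses are illustrative placeholders;
    # the EC's actual exhaustive sector list has not been published yet.

    HIGH_RISK_SECTORS = {"transport", "healthcare"}
    EXCEPTIONAL_HIGH_RISK_USES = {"recruitment", "remote_biometric_identification"}

    def is_high_risk(sector: str, use: str, creates_significant_risk: bool) -> bool:
        """Return True if an AI application would qualify as high-risk.

        Two cumulative criteria: (1) the application is used in a listed
        high-risk sector AND (2) it is used there in a manner likely to
        create significant risks. Exceptionally, certain uses are
        high-risk irrespective of the sector concerned.
        """
        if use in EXCEPTIONAL_HIGH_RISK_USES:
            return True
        return sector in HIGH_RISK_SECTORS and creates_significant_risk

    # A low-risk use in a high-risk sector falls outside the regime:
    print(is_high_risk("healthcare", "appointment_scheduling", False))  # False
    # An exceptional use is high-risk regardless of sector:
    print(is_high_risk("retail", "recruitment", False))                 # True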

  4. Regulatory requirements will be imposed on high-risk AI applications

The EC identifies several requirements that could be included in the regulatory framework for high-risk AI applications, thereby relying on the ethical guidelines issued by the AI HLEG. These requirements relate to:

  • ensuring that training data is sufficiently comprehensive and representative to guarantee the quality of the outcome, while also respecting the applicable data protection rules;
  • adequate record and data keeping requirements, for instance in relation to the data used for training and testing, their selection and the related training or testing methodologies;
  • providing adequate information and ensuring transparency regarding the use, limitations and capabilities of high-risk AI systems;
  • guaranteeing the robustness and accuracy of high-risk AI systems by avoiding and minimising the different risks they may be vulnerable to (e.g. adequately dealing with inconsistencies and being resilient against attacks and manipulations);
  • ensuring appropriate human involvement ranging from validating and reviewing output generated by AI before it becomes effective to giving humans the possibility to oversee the functioning of an AI system in real time with the ability to intervene and deactivate if appropriate;
  • ensuring that the use of AI for remote biometric identification, such as facial recognition, in public spaces is duly justified, proportionate and subject to adequate safeguards (this is nearly impossible for commercial applications).
  5. Existing legal framework will continue to apply for all AI applications

The existing regulatory framework will continue to apply to all AI applications regardless of the risks they entail. The applicable legal framework may, however, need some adjustments to sufficiently address the risks created by AI systems. The EC stresses that the legal framework should be improved, for instance regarding the uncertain allocation of liability between different economic actors, the distinction between services and products, or the changing functionality of AI systems. A European approach is necessary considering that several Member States are already exploring options for national legislation to address the many challenges created by AI.

  6. Ecosystem of excellence should enable the EU to become a global AI leader

The EC wants to build an AI ecosystem of excellence to allow the EU to support the development and uptake of AI. This should contribute to the EU’s ambition to become a global leader in AI and data. To achieve this ecosystem of excellence, several actions are required including:

  • better cooperation with and between Member States;
  • investing in research and innovation as well as in the development of AI-related skills: the EC will facilitate and invest in the creation of top-notch research centres and support cooperation between existing research and testing centres. This should eventually enable the EU to take a leading role in different sectors such as healthcare or mobility;
  • supporting SMEs by ensuring that they can access and use AI. Each Member State should have at least one digital innovation hub with a high degree of specialisation in AI;
  • promoting/enabling partnerships with the private sector;
  • promoting the adoption of AI by the public sector.
  7. Mandatory certification for high-risk AI systems, voluntary labelling for other AI systems

The EC also envisages the creation of prior conformity assessments to ensure compliance of high-risk AI applications with the mandatory requirements discussed above. These prior assessments could include procedures for testing, inspection or certification. Such an assessment should be part of the conformity assessment mechanisms that already exist for a large number of products before they are placed on the EU market. Where no such existing mechanisms can be relied on, similar mechanisms may need to be established, drawing on best practices and possible input from stakeholders. For AI systems that are not high-risk, voluntary certification/labelling mechanisms may be established. This would give AI developers an incentive to adhere to certain requirements in order to obtain a label/certificate for trustworthy AI. Any prior conformity assessment should, nonetheless, be without prejudice to the monitoring of compliance and ex post enforcement by competent national authorities.

Concluding remarks and forecast

This blog post has identified some key takeaways from the White Paper on AI. These can serve as a starting point for the open public consultation period foreseen by the EC. Although different stakeholders will probably bring forward many (more) issues, three elements will be of great importance. First, the criteria to determine whether AI applications qualify as high-risk, and especially the notion of ‘exceptional circumstances’, will probably lead to debate and require further clarification. Second, many questions of liability for damage caused by AI, both under tort and contract law, will need to be addressed, as this is a recurring topic in all documents. Third, certification is portrayed as the solution to ensure and prove that high-risk AI systems meet the applicable requirements. While certification surely has its benefits, clarity and a well-considered approach are necessary. In any case, this White Paper ends a 100-day period of uncertainty and speculation, kicking off a 90-day public consultation period during which researchers at the Centre for IT & IP Law (CiTiP) and the Knowledge Centre Data & Society will gladly provide input.

Published under licence CC BY-NC-ND.

Authors

  • Brahim Benichou

    Brahim Benichou is an academic legal researcher at CiTiP. He focuses on the legal and ethical aspects of AI as part of the Knowledge Centre for Data & Society and on the application of the GDPR in both tech and SME environments. He is also Of Counsel at NautaDutilh (Privacy, Cybersecurity and IT) and co-founded myprivacyspecialist.com.

  • Thomas Gils

    Thomas Gils is an academic researcher at CiTiP, working for the Flemish Knowledge Centre Data & Society. He studied law and philosophy at the KU Leuven and is currently also an associate with DLA Piper (Intellectual Property and Technology) in their Brussels office.

  • Dr. Jan De Bruyne

    Dr. Jan De Bruyne works as a senior academic researcher on legal and ethical aspects of AI at the Knowledge Centre for Data & Society. He is a lecturer in e-contracts and a postdoctoral researcher at CiTiP. He also works as a postdoctoral researcher on AI, liability and certification at the Ghent University Faculty of Law and Criminology.

  • Ellen Wauters

    Ellen Wauters holds master’s degrees in Political Science (2001) and Law (2010). She has experience in media law, intellectual property and consumer law. Before joining CiTiP, she worked as a legal expert on the GDPR for the social secretariat for notaries. At CiTiP she is part of the Knowledge Centre Data & Society with a focus on AI and also works on the project Spectre, which focuses on smart cities.
