Standardizing Artificial Intelligence

A Critical Assessment of the European Commission’s Proposal for an Artificial Intelligence Act

The Proposal of the European Commission for an Artificial Intelligence Act (AIA) relies mainly on the idea of co-regulation through standardization to ensure that high-risk AI systems comply with the regulation. The following blog article – based on a research paper – criticizes this approach, discussing in particular the concerns surrounding the excessive delegation of power to private standardization organizations, the lack of democratic control, the insufficient involvement of interest groups and the limited possibilities for subjecting standards to judicial control.

An article by Martin Ebers

Regulation of AI Systems in the AIA

On April 21, 2021, the European Commission presented its long-awaited proposal for a Regulation “laying down harmonized rules on Artificial Intelligence”, the so-called “Artificial Intelligence Act” (AIA). At the heart of the proposal is the idea of co-regulation through standardization to ensure that providers of high-risk AI systems comply with the regulation: Title III, Chapter 2 AIA contains an extensive list of essential requirements which have to be observed before a high-risk AI system is placed on the market. These requirements include, inter alia, the obligation

  • To use high-quality training, validation and testing data;
  • To establish documentation and design logging features;
  • To ensure an appropriate degree of transparency and to provide users with information;
  • To ensure human oversight; and
  • To ensure robustness, accuracy and cybersecurity.

Standardization as the Cornerstone of the AIA

In order to enforce these obligations, the AIA relies on the New Legislative Framework (NLF). The NLF is characterized by product safety laws that specify only the essential requirements to which products must conform, whereas the task of giving these essential requirements a more concrete form is entrusted to the three European standardization organizations (ESOs) – CEN, CENELEC and ETSI.

The AIA follows this approach closely. According to recital (61) AIA, “[s]tandardization should play a key role to provide technical solutions to providers to ensure compliance with this Regulation”.

Consequently, the above-mentioned mandatory requirements are worded in a rather broad way. Instead of formulating the requirements for high-risk AI systems itself, the regulation defines only the essential requirements, while the details are left to standards.

Thus, for example, Art. 10 AIA states that training, validation and testing data must be “relevant, representative, free of errors and complete” (Art. 10(3) AIA) to ensure that the AI system “does not become the source of discrimination prohibited by Union law” (Recital (44) AIA), without indicating which forms of bias are prohibited under the existing framework or how algorithmic bias should be mitigated. The same applies to Art. 13(1) AIA and its call for high-risk AI systems to be designed and developed in such a way as to ensure that their operation is “sufficiently transparent to enable users to interpret the system’s output and use it appropriately”. Here, again, the AIA leaves open which type and degree of transparency should be regarded as appropriate.
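To see how much concretization is left to the standards, consider what a testable bias criterion would even look like in practice. The following minimal Python sketch computes one candidate metric – the demographic parity difference – and compares it against a hypothetical threshold. Neither the metric, the protected groups nor the threshold comes from the AIA; all three are illustrative assumptions of exactly the kind a harmonized standard would have to settle.

```python
# Purely illustrative sketch: one candidate bias metric ("demographic
# parity difference") that a harmonized standard would have to choose,
# define and threshold - the AIA itself specifies none of this.

def demographic_parity_difference(labels, groups):
    """Difference between the highest and lowest rate of favourable
    outcomes (label == 1) across the protected groups in the data."""
    rates = {}
    for label, group in zip(labels, groups):
        positives, total = rates.get(group, (0, 0))
        rates[group] = (positives + label, total + 1)
    shares = [positives / total for positives, total in rates.values()]
    return max(shares) - min(shares)

# Toy training data: 1 = favourable outcome, two groups "A" and "B".
labels = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

THRESHOLD = 0.1  # hypothetical limit a standard might set; not from the AIA
gap = demographic_parity_difference(labels, groups)
print(f"demographic parity difference: {gap:.2f}")
print("within threshold" if gap <= THRESHOLD else "exceeds threshold")
```

Even this toy check exposes the choices the AIA leaves open: which metric counts as measuring “bias”, which groups are protected, and which numerical threshold separates acceptable from prohibited disparities would all have to be decided by the standard itself.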

With regard to enforcement, the AIA primarily relies, for most stand-alone AI systems, on an ex ante conformity assessment to be carried out by the provider under its own responsibility (recital 64; Art. 43(2) AIA), combined with a presumption of conformity if the provider follows harmonized standards (Art. 40 AIA).

The Promise of Standardizing AI Systems

Efforts to standardize AI systems are in full swing. International, European and national Standards Development Organizations (SDOs) have begun to develop standards for AI systems.

The European Commission, a number of countries and other political actors have high hopes for such standards: standards could help to establish not only uniform technical requirements but also legal and ethical requirements for AI systems. For example, they could define the criteria for quality, explainability, fairness, safety, security and privacy of AI systems – paving the way for uniform certification procedures.

Practical Difficulties in Standardizing AI Systems

However, whether legal requirements and ethical values can be translated into standards and technical specifications is an open question. The European Commission seems to be quite optimistic. According to the Impact Assessment accompanying the AIA proposal, the Commission assumes “that a large set of relevant harmonized standards could be available within 3-4 years from now”, that is to say in 2024/2025.

One might wonder, however, whether this is a realistic assessment. According to the German Standardization Roadmap on AI (p. 74), efforts to develop standards for ethical AI systems are still in their infancy.

In addition, the standardization of AI systems faces a number of practical difficulties that complicate the process further:

  • AI is subject to extremely rapid change in research and development, so standards risk becoming obsolete very quickly.
  • AI systems are used in different industries and sectors, each with its own characteristics and requirements.
  • Learning AI systems change after deployment and therefore require constant reassessment.
  • The quality of many AI systems depends on the input data, which makes it difficult to establish universally verifiable criteria.
  • AI systems are “sociotechnical systems”; it is therefore not sufficient to focus on the technology alone – ideally, the entire process must be taken into account.
  • Many ethical and legal questions are still unresolved, for example how to avoid bias and discrimination, or how to make AI systems more transparent and explainable.
  • Finally, one might wonder whether the ESOs even have enough expertise to translate legal requirements and ethical values into standards.

Harmonized Standards as Delegated Rule-Making

Beyond these practical difficulties, private standards also raise concerns about the excessive delegation of power to private ESOs. Formally, harmonized standards are voluntary rules drafted by private bodies such as CEN or CENELEC. Ultimately, however, there can be no doubt that ESOs exercise rule-making power, since harmonized standards, once published in the Official Journal of the European Union, have binding legal effects, especially for Member States. According to Art. 40 AIA, Member States must accept all high-risk AI systems which are in conformity with harmonized standards. The imposition of additional requirements under national law on products covered by harmonized standards could even lead to an infringement action under Art. 258 TFEU against a Member State. Thus, it can be concluded that harmonized standards have binding legal effects that come close to those of legal norms.

Lack of Democratic Control

Such a delegation of rule-making power to ESOs is problematic above all due to the lack of democratic oversight.

According to the Standardization Regulation 1025/2012, harmonized standards are developed exclusively by the ESOs. Although they can object, neither the European Parliament nor the Member States have a binding veto over harmonized standards mandated by the Commission. In practice, even the European Commission has only limited powers to influence standards. In its Blue Guide of 2016, the Commission emphasizes that “the technical contents of standards is under the entire responsibility of the European standardisation organisations” and is not reviewed by public bodies, because EU legislation does “not foresee a procedure under which public authorities would systematically verify or approve (…) the contents of harmonised standards”.

Lack of Participation of Stakeholders

Another problematic aspect is the lack of meaningful participation of interest groups in the process of drafting standards. According to the internal regulations of CEN and CENELEC, interest groups do not enjoy voting rights; they can only access documents, propose inputs, formulate advice, and submit comments and technical contributions. Furthermore, European stakeholder organizations are excluded from any active participation altogether when CEN/CENELEC simply adopt international standards developed, for example, by ISO or IEC.

Apart from these legal restraints, stakeholder organizations face various de facto obstacles to using the CEN/CENELEC participatory mechanisms effectively. Most civil society organizations and consumer associations have no experience in standardization; they might not even be represented at EU level. Moreover, active participation is costly and time-consuming: CEN/CENELEC standardization committees are dispersed across Europe, participation in these committees is generally subject to a fee, and a single standard may take years to be published.

For all these reasons, it does not seem very realistic that interest groups will be able to influence the process of standardizing AI systems in the same way as they can influence public legislation.

Lack of Judicial Control

Moreover, harmonized standards are in essence immune from judicial review. Although the CJEU decided in James Elliott that it has jurisdiction to interpret harmonized standards in preliminary ruling proceedings, it seems unlikely that the Court would also be willing to rule on the validity of harmonized standards, whether in an annulment action (Art. 263 TFEU) or in a preliminary ruling procedure (Art. 267 TFEU).

Even if this were the case, however, it does not seem likely that the CJEU would review and invalidate the substance of a harmonized standard. At the end of the day, the subject of such a dispute could only be the “decision” taken by the Commission to publish a reference to the standard in the Official Journal. Only this action (but not the standard itself, which remains the product of a private organization) could be considered an “act” of the European institutions. Accordingly, the CJEU could only review whether the Commission made an error. As explained above, however, this assessment largely relates to formal rather than substantive issues.

Conclusions

The proposed rules of the AIA for high-risk systems raise serious concerns. For these systems, the European Commission primarily wants to rely on an ex ante conformity assessment which is carried out not by external third parties but by the companies themselves – combined with a presumption of conformity if the provider follows harmonized standards, which are to be developed by the ESOs in accordance with the NLF. However, the ESOs are clearly overburdened by this task. The standardization of AI systems is not a matter of purely technical decisions. Rather, a series of ethical and legal decisions must be made, which cannot be outsourced to private SDOs but require a political debate involving society as a whole.

In light of these considerations, the European Commission should reconsider its approach.

Fundamental ethical and legal decisions should not be delegated to private SDOs, but should be the subject of an ordinary legislative procedure and a political debate that can be shaped by industry, civil society organizations, consumer associations and other actors. Accordingly, the AIA should establish legally binding obligations regarding the essential requirements for high-risk AI systems.

These legally binding obligations could then, in turn, be further specified by harmonized standards for specific applications developed by the SDOs. Since such harmonized standards can still have far-reaching social and legal consequences, European policymakers should at the same time take the necessary steps to improve the standardization process. For example, societal stakeholders could be granted voting rights, rights of appeal, access to technical bodies and advisory groups, as well as free access to existing standards and other deliverables (for non-commercial purposes). Such amendments, if adopted, could indeed contribute to a better representation of stakeholder interests and counterbalance, at least in part, the negative effects of private rule-making, while maintaining the highest standards of technical expertise.

Published under licence CC BY-NC-ND. 

This Blogpost was written by

Author

  • Martin Ebers

    Martin Ebers is Associate Professor of IT Law at the University of Tartu (Estonia) and permanent research fellow (Privatdozent) at the Humboldt University of Berlin. He is co-founder and president of RAILS. In addition to research and teaching, he has been active in the field of legal consulting for many years. His main areas of expertise and research are IT law, liability and insurance law, and European law.
