“Standardization of Artificial Intelligence” – the 4th RAILS Conference

On May 30, 2022, the Robotics and AI Law Society (RAILS e.V.), the KU Leuven Centre for IT & IP Law (CiTiP) and the Knowledge Centre for Data & Society invited numerous lawyers, researchers, artificial intelligence (AI) and standardization experts, and practitioners to KU Leuven to discuss one of the most pressing issues in the field of AI regulation – the standardization of AI.

A conference report by Susanne Rönnecke, LL.M. (Duke)

Standardization as a means to regulate artificial intelligence in the EU

The European Commission presented its proposal for a regulation laying down harmonized rules on artificial intelligence, the Artificial Intelligence Act (AIA), in April 2021. The AIA attaches great importance to standardization for the regulation of the technology. According to Recital 61 of the AIA, standardization should even “play a key role to provide technical solutions to providers to ensure compliance with this Regulation”. Regulation (EU) No 1025/2012 on European standardisation defines a standard as “a technical specification, adopted by a recognised standardisation body, for repeated or continuous application, with which compliance is not compulsory”. Standards can facilitate the implementation of AI systems and market access, and they can establish uniform legal requirements and ethical values. Nevertheless, the definition of legal requirements by standardization organizations – which are privately organized – raises concerns regarding their democratic legitimacy and judicial control. The conference aimed to provide a platform to discuss those issues.

The First Panel: The Current AI Standardization Landscape in the EU

The first panel provided an overview of the current AI standardization landscape in the EU.

Sarah De Nigris, a researcher at the Joint Research Centre of the European Union and author of the study on the AI standardization landscape published in 2021 by AI Watch, presented the survey of ongoing standardization activities on AI by European and international standards organizations. AI Watch is an initiative of the European Commission, jointly developed by the EC Joint Research Centre and the Directorate-General for Communications Networks, Content and Technology in 2018. It aims at monitoring the development, uptake and impact of artificial intelligence for Europe and the implementation of the European Strategy for AI. De Nigris started with an explanation of the survey’s methodology: the team first collected all standards regarding AI and then mapped them to specific requirements of the AIA. In an in-depth analysis, they identified the essential and core standards best suited to operationalize the AIA requirements, on which the European Commission should focus when implementing the AIA. They were able to identify significant gaps for the AIA requirements “data governance”, “technical documentation” and “risk management system”.

The second speaker was Salvatore Scalzo, policy and legal officer for artificial intelligence policy development and coordination at the European Commission. He spoke about the Commission’s proposal for the AIA and the role of standards, focusing on the Commission’s ongoing standardization activities. Following the mapping research on the relevance of ongoing standardization activities by AI Watch (as presented by Sarah De Nigris), the Commission strongly engages in European and international standardization organizations, with a focus on EU-US cooperation. Further, it is preparing the first standardization requests, which should be adopted in the course of 2022. Scalzo highlighted four main requirements for standards that should be addressed by those requests: First, standards should focus on risks common across AI systems (horizontal standards). Second, they should contain implementation methods to verify compliance with technical requirements. Third, standards should adequately consider the specificities of SMEs. Fourth, the standards development bodies should involve SMEs and civil society organizations in the development process.

The third presentation was given by Jelle Hoedemaekers, an engineer and expert on ICT standardization at Agoria and co-lead of AI4Belgium’s working group on ethics and law. Hoedemaekers added a national – Belgian – perspective to the subject. In particular, he explained how standards are jointly developed by national, European, and international standardization bodies. His illustration of the drafting processes – an exchange of documents, opinions and comments between international and national bodies and industry stakeholders – made clear that a standard reflects the consensus of different stakeholders. Hoedemaekers welcomed the new role assigned to standards by the AIA: it will put standards in the spotlight and draw attention to their role in implementing technical requirements. He closed his remarks by pointing to the possibility and importance of involving all stakeholders – industry and civil society – and appealed to the audience to get involved in the standard-setting process.

In the following panel discussion, chaired by Professor Martin Ebers (co-founder and president of RAILS and professor of IT Law at the University of Tartu), a major concern was raised regarding the competition between European and international standardizing bodies. The panellists discussed what the European Commission can do to ensure a truly European standardization approach, reflecting European values, without merely adopting existing international standards. They agreed that it is of particular importance to bring together European stakeholders, including civil society organizations.

The Second Panel: Standardizing AI – Pressing Issues

In the second panel, two members of the artificial intelligence working group (SC 42) of the ISO/IEC Joint Technical Committee, which develops international standards for information and communication technologies, presented pressing issues in the standardization of AI.

Colin Crone – also an editor of the data lifecycle framework ISO/IEC CD 8183, an associate partner of the British Standards Institution and director of IT Konstruct Ltd. – talked about standardizing data quality. He presented a new project of SC 42 – the ISO 5259 series – relating to data quality for machine learning and analytics. It addresses the importance of the quality of data for the successful implementation of big data and AI systems. The series consists of five parts which cover terminology (part 1), data quality measures (part 2), data quality management requirements and guidelines (part 3), a data quality process framework to ensure data quality for training and evaluation (part 4) and a data quality governance framework (part 5).
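To make the notion of a “data quality measure” tangible, the following minimal Python sketch computes two toy measures – completeness and label consistency – on a small example dataset. The measure names, thresholds and data are our own illustration and are not taken from the ISO 5259 series itself.

# Illustrative only: two toy data-quality measures for an ML training set.
# The measure names are our own examples, not ISO 5259 definitions.

def completeness(records, fields):
    """Share of non-missing values across the required fields."""
    total = len(records) * len(fields)
    present = sum(1 for r in records for f in fields if r.get(f) is not None)
    return present / total if total else 1.0

def label_consistency(records, features, label):
    """Share of records whose identical feature values never carry conflicting labels."""
    seen = {}
    for r in records:
        key = tuple(r.get(f) for f in features)
        seen.setdefault(key, set()).add(r.get(label))
    consistent = sum(1 for r in records
                     if len(seen[tuple(r.get(f) for f in features)]) == 1)
    return consistent / len(records) if records else 1.0

data = [
    {"age": 34, "income": 52000, "label": "approve"},
    {"age": 34, "income": 52000, "label": "reject"},   # conflicting label
    {"age": 51, "income": None,  "label": "approve"},  # missing value
]

print(f"completeness:      {completeness(data, ['age', 'income']):.2f}")
print(f"label consistency: {label_consistency(data, ['age', 'income'], 'label'):.2f}")

On this toy dataset the sketch reports a completeness of 0.83 (one missing value out of six) and a label consistency of 0.33 (two of three records share feature values but disagree on the label) – exactly the kind of quantified check the series is meant to make routine.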

Julian Padget – also an associate professor of Computer Science at the University of Bath, a member of CEN-CENELEC JTC21 (AI), IEEE P7003 (algorithmic bias) and the UN Committee of Experts (Big Data) – spoke about standardizing explainability. Since the General Data Protection Regulation (GDPR) and the EU High-Level Expert Group require that an automated decision or AI system allow for human intervention, oversight and an informed decision, explainability needs to be standardized. Padget illustrated that an explainable AI system can express the key factors influencing its results in a way that humans can understand. He highlighted that transparency – a system’s ability to communicate appropriate information to relevant stakeholders – is a prerequisite for explanation. A system that is interpretable – meaning that a human can understand the causes of the output by inspecting its internal structures – is self-explainable, a white box. In contrast, so-called black boxes need explicability measures such as traceability, auditability, and transparent communication. Explainability is an important precondition for detecting and avoiding bias in AI systems. With ISO/IEC AWI TS 6254 and ISO/IEC TR 24027:2021 he presented two standards dealing with approaches and methods to achieve explainability and to assess bias.
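To illustrate the white-box/black-box distinction sketched above, here is a small, purely hypothetical Python example: a linear scoring rule whose explanation is simply its own internals, contrasted with an opaque function that needs a post-hoc sensitivity probe. The models and the probing method are our own illustration, not drawn from ISO/IEC AWI TS 6254 or ISO/IEC TR 24027:2021.

# Illustrative only: the white-box vs. black-box contrast described above.

# White box: a linear scoring rule is interpretable – a human can inspect
# the weights and read off which factors drive the result.
WEIGHTS = {"income": 0.6, "debt": -0.3, "tenure": 0.1}

def white_box_score(applicant):
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def explain_white_box(applicant):
    # The explanation is just the model's own internals.
    return {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}

# Black box: the internals are opaque to the user, so we attach a post-hoc
# explicability measure – a crude probe that perturbs one input at a time.
def black_box_score(applicant):
    x = applicant["income"] - 0.5 * applicant["debt"]
    return x * x / (1 + abs(applicant["tenure"]))

def explain_black_box(applicant, eps=1e-3):
    base = black_box_score(applicant)
    effects = {}
    for k in applicant:
        probed = dict(applicant, **{k: applicant[k] + eps})
        effects[k] = (black_box_score(probed) - base) / eps
    return effects

applicant = {"income": 3.0, "debt": 1.0, "tenure": 2.0}
print("white box:", explain_white_box(applicant))
print("black box (post-hoc):", explain_black_box(applicant))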

The following discussion – chaired by Charlotte Ducuing (PhD researcher at CiTiP) – addressed the different understandings and roles of standards. Whereas the panellists underlined that standards embody best practices and allow companies to implement technologies efficiently and cost-effectively, legal professionals emphasized the importance of standards for assessing state-of-the-art requirements in liability matters. Given the importance of standards for legal analysis and their role in substantiating general principles of the GDPR and the AIA, concerns were raised about the lack of democratic legitimation, since standardizing bodies are private organizations. Crone and Padget met those objections by highlighting that the organizations’ work is transparent and that a variety of stakeholders are and can be involved in the drafting process.

General Discussion and Third Panel: How to Standardize Ethics and Law in an AI Context

After lunch, a panel and general discussion led by Anton Vedder (professor of IT Law and Ethics at KU Leuven) began. The members of the panel represented different points of view on the question of how to standardize ethics and law in an AI context. Joanna Bryson is a professor of Ethics and Technology and co-founder of the Centre for Digital Governance at the Hertie School in Berlin, as well as a German nominee to the Global Partnership on AI’s working group on the responsible development, use and governance of AI. Her argument was that legal standards need to be agile, keep capturing best practice, and be ratcheted up. Olia Kanevskaia, an assistant professor of European Economic Law and Technology at Utrecht University, argued that the “new approach” harmonization policy and the three European standards bodies are not well equipped for AI standardization. Ursula Pachl, Deputy Director General of BEUC, the European Consumer Organization, argued that standards should deal with technical aspects, not with legal principles or fundamental rights; they should never replace legal requirements or democratic law-making procedures. Rob Heyman, coordinator of the Knowledge Centre Data & Society and senior researcher at imec-SMIT, stated that AI ethics standardization should move away from principles towards practices, creating fewer expert boundaries and more inclusive common practices. Unfortunately, Filiz Elmas, head of Business Development Artificial Intelligence at the German Institute for Standardization (DIN) and project leader of the German Standardization Roadmap on Artificial Intelligence, had to cancel her participation at short notice. She had intended to present her view that standards for AI support, from the technical side, protection against bias, discrimination, and manipulation, and thus strengthen trust in and acceptance of AI in industry and society.

The audience was invited to react to those statements, and a vibrant, engaging discussion arose around the opposing points of view. All in all, the non-mandatory nature of standards and the concretization of legal principles by private organizations provoke objections and concerns within the legal community regarding their democratic legitimization. In contrast, standard-setting bodies are considered best placed to facilitate the implementation of legal principles, as they are close to industry stakeholders and experienced in specifying prerequisites for technologies.

After the discussion, the last panel, chaired by Jan De Bruyne (research expert in Tort Law and AI and lecturer at CiTiP, assistant professor of Digital Law at eLaw Leiden, and senior researcher at the Knowledge Centre for Data & Society), addressed geopolitical and future challenges of AI standardization.

Adriana Nugter, an independent consultant, member of the IEEE Standards Association, lecturer at VU Amsterdam, and co-founder of QCollective on ethics and technology, presented a new perspective on standards as a means to increase influence and build geopolitical leverage across many technical domains, including AI. If those standards implement ethical principles such as transparency or explainability, they spread European values beyond Europe’s borders. Nevertheless, international cooperation is essential to ensure compliance with and implementation of standards.

Sebastian Hallensleben, head of Digitalization & AI at the German Association for Electrical, Electronic & Information Technologies (VDE e.V.) and chair of the European AI standardization committee CEN-CENELEC JTC21, identified future challenges for AI standardization: the already discussed lack of democratic legitimacy; the clash between European sovereignty and the monopoly of global companies shaping de facto standards; and the gap between the European will to standardize AI and the reality that the technology is developed, the data collected, and the models trained in different countries and jurisdictions, affecting people in yet other jurisdictions.

In a nutshell

The 4th RAILS Conference illustrated how pressing and complex the issues of standardizing AI are. It allowed different aspects and opinions to be heard and discussed. The participants received an introduction to the relevance of standards for the regulation of AI in Europe and to the standardization landscape in the EU, an insight into the work of standardizing bodies, and an overview of the contrasting roles of standards.

All in all, most panellists and participants saw the new legislative model and the increasing importance of standards for AI regulation as an opportunity, while also underlining the risks that need to be addressed. Continuing interdisciplinary debate and monitoring of the standardization process will be needed to safeguard fundamental legal principles.


For an in-depth discussion on standardization of AI see M. Ebers, Standardizing artificial intelligence, January 22, 2022, https://blog.ai-laws.org/standardizing-artificial-intelligence/.

Published under licence CC BY-NC-ND. 

This Blogpost was written by

Author

  • Susanne Rönnecke

Susanne Rönnecke, LL.M. (Duke) is a doctoral candidate and research assistant at the University of Jena with a focus on data protection challenges of artificial intelligence. Previously, she completed an LL.M. at Duke University with a focus on Law and Technology. At RAILS, she has been responsible for project coordination since 2021.
