Navigating the Machinery Regulation: Key Impacts on Smart Robotics

The European Union’s Machinery Regulation (MR) marks a new era in machinery safety, significantly impacting smart (AI-based) robotics. As robotics technology continues to advance, the integration of autonomous and self-learning systems into machines presents both opportunities and challenges. In this post, we’ll explore how the MR addresses these developments, focusing on the nuances of terminology and the regulation’s interaction with the Artificial Intelligence Act (AIA).

An article by Professor Tobias Mahler

Regulating smart robotics hinges on the precise and consistent use of terminology. Unlike the AIA, the MR does not explicitly use the term “artificial intelligence” (AI), and although it addresses smart robots, it does not use that term either. Instead, it refers to concepts such as “autonomy” and “self-learning” to describe the capabilities of modern machinery.

Understanding the Terminology: Autonomy and Self-Learning in the MR

Autonomy in smart robotics refers to a machine’s capability to operate independently, making decisions without immediate human input during its real-time interactions. This is particularly important for machines that need to navigate and interact with their environment, such as service robots in hospitals. The MR stipulates (Annex III, Part B, 1.2.1) that while these machines can operate autonomously, there must be mechanisms in place for human oversight, ensuring that they can be monitored and controlled when necessary.
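For readers approaching this from the engineering side, the following minimal sketch (in Python) illustrates one possible shape of such an oversight mechanism: an autonomous control loop that checks a human-writable override channel before every decision. All names here are hypothetical; the MR prescribes the outcome, namely monitorability and controllability, not any particular implementation.

```python
# Minimal sketch of a human-oversight hook in an autonomous control loop.
# Purely illustrative: all names are hypothetical, and the MR prescribes
# outcomes (monitorability, controllability), not implementations.
from enum import Enum, auto

class Command(Enum):
    NONE = auto()
    PAUSE = auto()
    STOP = auto()

class SupervisedRobot:
    def __init__(self) -> None:
        # Channel a human supervisor can write to at any time.
        self.command = Command.NONE

    def override(self, command: Command) -> None:
        """Entry point for the supervisor (monitoring console, e-stop)."""
        self.command = command

    def step(self) -> str:
        # The override channel is checked before every autonomous
        # decision, keeping the machine controllable throughout operation.
        if self.command is Command.STOP:
            return "halted: safe stop engaged"
        if self.command is Command.PAUSE:
            return "paused: awaiting supervisor release"
        return self.decide()

    def decide(self) -> str:
        # Stand-in for autonomous, real-time decision-making.
        return "navigating autonomously"

robot = SupervisedRobot()
print(robot.step())           # navigating autonomously
robot.override(Command.STOP)  # human intervention
print(robot.step())           # halted: safe stop engaged
```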

Self-learning, on the other hand, relates to the ability of a machine to evolve its behaviour based on sensory and other input. Machine learning techniques allow a robot to adapt to new situations over time. The MR takes a cautious approach here, setting limits on how far these systems can learn independently to prevent unexpected or unsafe behaviours.
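One way to picture such limit-setting in code is a validated “safe envelope” that learned parameters cannot leave. The sketch below, with invented names and values, illustrates the idea; the MR itself mandates no particular design.

```python
# Illustrative only: a hypothetical safe envelope, fixed at design time
# and covered by the machine's risk assessment.
SAFE_MAX_SPEED_MS = 1.0  # metres per second (invented value)

def apply_learned_speed(proposed: float) -> float:
    """Accept a learned speed parameter only within the certified envelope.

    Self-learning may tune behaviour over time, but never beyond the
    limits assessed as safe when the machine was placed on the market.
    """
    return min(proposed, SAFE_MAX_SPEED_MS)

print(apply_learned_speed(0.8))  # 0.8: within the envelope, accepted
print(apply_learned_speed(1.7))  # 1.0: clamped to the certified limit
```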

This careful choice of terminology allows the MR to remain flexible and technology-neutral, accommodating a wide range of machines and future technological developments. However, it also creates potential overlaps with the AIA, which uses a broader definition of AI systems, leading to a need to ascertain which regulations apply to specific technologies.

Consequences of the MR for Smart Robotics

The MR’s focus on safety is evident in its detailed requirements for the design and operation of machinery, which apply directly to smart robots. One of the regulation’s core aspects is its emphasis on risk assessment throughout the lifecycle of a machine. For smart robots, this means evaluating not only the physical risks posed by autonomous movements but also the psychological impact on the robot operator. This broad interpretation of safety underscores the MR’s intention to ensure that robots not only operate safely but also interact with humans in a manner that is neither intimidating nor harmful. While it is unclear whether this explicitly extends to evaluating psychological impacts on third parties, such as vulnerable groups in public or healthcare settings, such assessments are practically essential.

The MR also introduces specific requirements for human-robot interaction (HRI). This includes ensuring that robots can respond to human commands and that their actions are predictable and directable. The regulation’s approach to HRI reflects a broader vision where robots are not isolated from humans but integrated into environments where they can work alongside or interact with people.

Overlapping Requirements: The MR and the AIA

The intersection of the MR and the AIA is particularly significant when dealing with AI systems in smart robotics, requiring careful navigation of overlapping rule sets. In Article 6(1), the AIA categorizes certain AI systems as high-risk by reference to other harmonised legislation, including the MR. If a robot’s AI component is deemed high-risk under the AIA, it triggers additional AIA requirements that go beyond the MR.

For instance, if a robot’s safety function relies on an AI system with self-learning capabilities, both the MR’s requirements for machine safety and the AIA’s high-risk AI requirements would apply. This dual applicability could increase compliance costs and complicate operations, as manufacturers must adhere to both regulatory frameworks to meet safety standards.

One area of significant overlap is human oversight. Both the MR and the AIA emphasize the need for human supervision of autonomous systems. The AIA stipulates that high-risk AI systems must be designed to allow for effective oversight by natural persons throughout their operation. This principle mirrors the MR’s requirement for the supervisability of self-evolving machinery. Although the two rule sets differ in detail, both aim to ensure some level of human oversight. Their combined effect is that mechanisms for human intervention must be in place to correct or halt a robot’s operations should its autonomous capabilities stray from safe behaviours.

Another point of intersection is the recording and retention of data. The MR requires ‘enabled’ recording of safety-related data, with a mandatory retention period of one year, to ensure machinery compliance with safety standards. In contrast, the AIA mandates that high-risk AI systems automatically record ‘logs’ of events throughout the system’s lifespan, for very similar purposes. This highlights a difference in focus: the AIA targets ‘events’, while the MR targets ‘data on the safety-related decision-making process’. Furthermore, the AIA sets a shorter retention period in Article 19. Thus, while both frameworks require recording, albeit of distinct data types, the retention periods differ, and this discrepancy is likely to present practical challenges for companies seeking to comply with both regulatory frameworks. In addition, if the retained data are personal, the General Data Protection Regulation (GDPR) limits retention to the period necessary for the purposes for which the personal data are processed, a more open-ended assessment acknowledged in the AIA but not in the MR.
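To see why the differing periods bite in practice, consider a minimal logging sketch (in Python, with hypothetical names): the retention period becomes a configuration parameter that a compliance team must set per framework, and possibly per data category where personal data are involved.

```python
import time

# Hypothetical configuration: the MR's one-year period is used here, but
# AIA logs and GDPR-governed personal data may require different values.
RETENTION_SECONDS = 365 * 24 * 3600

safety_log: list[tuple[float, str]] = []

def record(event: str) -> None:
    """Append a timestamped safety-related event to the log."""
    safety_log.append((time.time(), event))

def prune(now: float | None = None) -> None:
    """Drop entries older than the configured retention period."""
    cutoff = (now if now is not None else time.time()) - RETENTION_SECONDS
    safety_log[:] = [(t, e) for (t, e) in safety_log if t >= cutoff]

record("obstacle detected: safe stop initiated")
prune()
print(len(safety_log))  # 1: entries inside the retention window are kept
```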

Moreover, the combined application of the AIA and MR pertains only to specific safety-related features of the robot, adding another layer of complexity. Additionally, the robot may run other software that could be classified as either prohibited or high-risk (Article 6(2)) under the AIA. In this regard, the robot functions like any other hardware, with the regulation of the hardware being separate from that of the software it operates. Consequently, applications running on a robot might be regulated as high-risk AI under one part of the AIA, while certain safety features enabled by AI would be regulated under the MR, in conjunction with the AIA.

Conclusion

The Machinery Regulation sets a foundational framework for the safety of smart robotics, focusing on the critical need to manage risks associated with autonomy and self-learning capabilities. By carefully defining these concepts, the MR remains flexible and adaptable to future technological advances, and it sidesteps the significant challenge, faced by the AIA, of defining what counts as artificial intelligence. However, the overlapping requirements with the AIA, particularly in areas such as human oversight and data recording, highlight the need for careful navigation by manufacturers to ensure full compliance.

The MR’s safety requirements for smart robots represent the first step in a complex regulatory journey, laying a crucial foundation for the future of robotics regulation. Together with the AIA and other parts of the legal framework, this is hopefully a sufficient basis for an initial phase of development. Future legislation must build upon this foundation, ensuring that as robotics and AI technologies evolve, they do so in a manner that upholds the dignity, autonomy and self-determination of individuals, particularly in sensitive areas such as healthcare.

This blog post is a direct result of the ongoing research within the VIROS (Vulnerability in the Robot Society) project at the University of Oslo, a multidisciplinary initiative funded by the Norwegian Research Council. VIROS tackles the pressing legal, ethical, and technical challenges arising from the growing integration of robotics and AI into society. The project specifically explores how regulatory frameworks like the MR and the AIA can effectively mitigate the vulnerabilities and risks associated with smart robotics, particularly concerning safety, security, and human rights. These insights contribute to VIROS’s mission of ensuring that advanced robotics are deployed in ways that are both innovative and socially responsible.

This brief post omits many details and references, which are included in a longer article published in Oslo Law Review.

Published under licence CC BY-NC-ND. 

Professor Tobias Mahler is deputy director of the Norwegian Research Center for Computers and Law at the Faculty of Law, University of Oslo. His current research focuses on regulating robotics and artificial intelligence, as well as on risk-based approaches in law, including in the context of digital identity. He has advised the EU Commission in its preparation for the Digital Services Act and participates in the development of standards for artificial intelligence. Mahler is co-founder of the Legal Innovation Lab Oslo (LILO) at the University of Oslo. He has been a guest researcher at Stanford Law School and the Max Planck Institute for Foreign and International Criminal Law in Freiburg, Germany. In 2024 he is on research leave at the Centre for Research on Law and Society (CEDIS) at NOVA Law School, Lisbon.
