AI Regulation in Canada: Protective and Innovation-Enabling

Global AI regulations aim to address the risks associated with deploying AI systems while simultaneously fostering responsible innovation. In essence, lawmakers find themselves walking a razor’s edge between two imperatives: regulation versus innovation. Within the Canadian context, there is a perceived risk of policymakers sliding towards the innovation side, particularly because enforcement oversight is delegated to the Minister of Innovation, Science, and Industry, raising concerns that industry interests may be prioritized over the human rights aspects of the act. Beyond this controversy, this post provides insights into how Canada’s Artificial Intelligence and Data Act (AIDA) navigates this delicate balance, reconciling the protective and innovation-enabling roles of AI regulation.

An article by Renan Gadoni Canaan

In 2019, Canada’s Treasury Board introduced the Directive on Automated Decision-Making (DADM) to enhance the transparency of algorithmic decisions within the federal administration. The aim was to minimize potential adverse effects of automated decisions in various domains, including credit, immigration, and access to healthcare.

The DADM introduced a new tool known as the Algorithmic Impact Assessment (AIA) to address risks associated with AI systems. This toolkit comprises questions designed to assess the impact of an intended automated decision system on four key aspects of the target population: their rights, health, economic interests, and sustainability. It also includes a set of questions about strategies to mitigate potential harm. The final score derived from these responses categorizes the AI system as having a low, moderate, high, or very high impact. Depending on this categorization, specific measures are mandated, such as consulting experts, publishing reviews on the Government of Canada website, and sharing specifications of the automated decision system in peer-reviewed journals.
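To make the scoring mechanism concrete, the Python sketch below shows how questionnaire answers could be combined into one of the four impact levels. The question names, weights, mitigation adjustment, and thresholds are invented placeholders for illustration; they are not the Treasury Board’s actual questionnaire or formula.

```python
# Illustrative sketch of an AIA-style scoring scheme. The questions, weights,
# mitigation adjustment, and level thresholds are hypothetical placeholders,
# not the Treasury Board's actual questionnaire.

RAW_IMPACT_QUESTIONS = {
    "affects_legal_rights": 3,        # rights of the target population
    "affects_health": 3,              # health or well-being
    "affects_economic_interests": 2,  # economic interests
    "affects_sustainability": 2,      # sustainability of the ecosystem
}

MITIGATION_QUESTIONS = {
    "human_in_the_loop": 2,           # a human reviews each decision
    "documented_data_provenance": 1,  # training data sources are recorded
    "planned_external_review": 1,     # periodic third-party review
}

# Impact levels as fractions of the maximum possible raw score.
LEVELS = [(0.25, "low"), (0.50, "moderate"), (0.75, "high"), (1.00, "very high")]


def impact_level(answers: dict[str, bool]) -> str:
    """Map yes/no questionnaire answers to one of four impact levels."""
    raw = sum(w for q, w in RAW_IMPACT_QUESTIONS.items() if answers.get(q))
    mitigation = sum(w for q, w in MITIGATION_QUESTIONS.items() if answers.get(q))
    score = max(raw - mitigation, 0)  # mitigation lowers the score, floored at zero
    fraction = score / sum(RAW_IMPACT_QUESTIONS.values())
    return next(label for bound, label in LEVELS if fraction <= bound)


# A system touching legal rights, with human review in place, scores "low" here.
print(impact_level({"affects_legal_rights": True, "human_in_the_loop": True}))
```

The design choice worth noting is that mitigation answers only lower, and never erase, the raw impact score, so a heavily mitigated but intrinsically harmful system still registers some impact.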

This limitation of the DADM’s application exclusively to the federal public sector aligns with Canada’s historical approach of separating digital regulation for the public and private sectors, given their distinct priorities. For instance, the initial data protection laws established several decades ago, the Privacy Act and PIPEDA, apply to the federal public sphere and the private sector, respectively.

Protective role: AIDA’s Mitigation of AI Risks

The third part of Bill C-27, known as the Artificial Intelligence and Data Act (AIDA), establishes comprehensive principles and definitions governing the administration and enforcement of AI systems. AIDA follows the Canadian tradition of differentiating between public and private regulation: its application is limited to the private sector, with no coverage of the public sector. Moreover, as a federal bill, AIDA’s authority extends only to interprovincial and international commerce, excluding intraprovincial trade from its scope.

Guided by consumer protection and human rights frameworks, AIDA follows a global trend in AI regulation, focusing on risk-based regulation similar to legislation such as the EU AI Act and the 2023 Brazilian AI Bill. AIDA primarily regulates AI systems classified as “high-risk” because of their potential to cause significant harm. In contrast to the EU AI Act, however, AIDA lacks an unacceptable-risk tier; consequently, it does not explicitly forbid the deployment of any particular AI system.

Notably, AIDA’s original draft lacked a precise definition of high-risk AI systems, leaving this determination to future regulations that do not require legislative approval. By shifting the responsibility for establishing norms from legislation to regulations, AIDA aims to enable an agile regulatory response: high-risk definitions can be updated routinely to meet the challenges of rapidly evolving AI, without lengthy legislative processes. Following substantial criticism during the legislative procedure, the Standing Committee on Industry and Technology passed a motion asking Innovation, Science and Economic Development Canada (ISED) to clarify its position on the missing high-risk definition and to provide further details on the proposed amendments to the act. This clarification was provided in October 2023, introducing more specific definitions of high-risk AI systems.

The responsibility for the administration and enforcement of AIDA lies with the Minister of Innovation, Science, and Industry, who is endowed with specific powers. The minister can, “upon reasonable grounds to believe”, request an audit, recommend changes based on the audit findings, or mandate the cessation of a high-risk AI system’s use. If none of these measures proves effective, two alternatives are available: imposing administrative monetary penalties or pursuing prosecution as an offence. Additionally, the minister is tasked with establishing the ‘Artificial Intelligence and Data Commissioner’, a pivotal role in supporting these responsibilities. However, it is not clear what powers and duties the appointed commissioner holds.

To address risks, AIDA mandates a range of measures tailored to different stages of an AI system’s lifecycle, covering system design, development, deployment, and operational management. These measures include preliminary risk assessments, careful evaluation of dataset biases, integration of human oversight mechanisms, robust documentation maintenance, and periodic risk reevaluation.
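As a rough illustration of how such lifecycle obligations might be tracked in practice, the sketch below maps each stage to the measures just named. The stage labels and the checklist helper are illustrative framing for this post, not requirements enumerated in AIDA’s text.

```python
# Illustrative mapping of lifecycle stages to risk-mitigation measures.
# The stage labels and the checklist helper are hypothetical framing,
# not obligations enumerated in AIDA's text.

LIFECYCLE_OBLIGATIONS: dict[str, list[str]] = {
    "design": ["preliminary risk assessment"],
    "development": ["dataset bias evaluation", "documentation maintenance"],
    "deployment": ["human oversight mechanism"],
    "operation": ["periodic risk reevaluation", "documentation maintenance"],
}


def outstanding(stage: str, completed: set[str]) -> list[str]:
    """List the measures for a lifecycle stage that are not yet completed."""
    return [m for m in LIFECYCLE_OBLIGATIONS.get(stage, []) if m not in completed]


# During development, bias evaluation is done but documentation is outstanding.
print(outstanding("development", {"dataset bias evaluation"}))
# -> ['documentation maintenance']
```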

Enabling role: AIDA’s Impact on Innovation

Strict regulations often raise concerns about potential legal barriers and increased development costs that may hinder innovation. Small and medium-sized companies, which lack the resources of larger organizations, such as dedicated compliance departments, face even greater challenges. With this in mind, AIDA asserts that small and medium-sized companies should receive particular assistance with compliance, although it does not specify the nature of this assistance.

Advocates of the open-source movement, as well as developers in research institutes and universities, also express concern that enforcing AI laws against open-source AI systems around the globe could stifle innovation by hindering collaborative development. AIDA addresses this issue by explicitly stating that AI systems built for research purposes and open-source AI models fall outside the legislation’s scope unless they function as fully developed AI systems.

Furthermore, the negative impact of regulation on innovation need not hold true if regulations are designed in a pro-innovation fashion. In the context of AI development, well-shaped regulatory approaches to fostering innovation can take different forms:

First, regulations can reduce uncertainty by delineating the landscape within which AI evolves, clarifying how AI innovations may be integrated into both local and international markets. In line with this, the recent recommendations on AIDA underscore a commitment to harmonize with globally recognized standards, exemplified by the EU AI Act and the OECD AI Principles. This alignment is expected to streamline compliance processes, lower entry barriers, and strengthen the competitive position of Canadian firms in international AI development.

Second, AI regulation should innovate in how it is itself designed and implemented, adapting to the context of AI. Well-known innovative regulatory mechanisms for enabling innovation are regulatory sandboxes, defined as “controlled environments where companies can test their innovative products, services, or business models under the supervision of regulatory authorities”. AI sandboxes have already been introduced by the EU AI Act and several other AI regulations around the world. AIDA, however, contains no provision creating a framework for AI regulatory sandboxes, even though Canada has successful sectoral sandbox experience to draw on, such as the Advanced Therapeutic Products pathway implemented by Health Canada and the fintech regulatory sandbox implemented by the Canadian Securities Administrators.

Conclusion

AIDA may protect against harms arising from the use of AI systems by adopting a risk-based approach and requiring a toolkit of measures throughout the entire lifecycle of high-risk AI systems. It may also encourage innovation by excluding open-source and research AI systems from its scope, while facilitating Canadian companies’ compliance with international standards.

However, some argue that AIDA’s broad definitions of high-risk systems, the lack of detail on the roles of the Minister and the appointed Artificial Intelligence and Data Commissioner, and the absence of widespread public discussion warrant a reconsideration of AIDA. If the bill does not pass in Parliament and is abandoned, this could present an opportunity to include more specific definitions, engage in broader discussions with various stakeholders, and introduce mechanisms to promote innovation.

Published under licence CC BY-NC-ND. 

This blog post was written by

Author

  • Renan Gadoni Canaan

    Renan Gadoni Canaan is a MITACS fellow at the Centre for Law, Technology and Society (Ottawa, Canada). He is currently pursuing his doctoral studies at the University of Ottawa with a focus on global AI regulations.

