Unfinished Architecture? Poland’s Draft Act on AI Systems – and the Struggle for Supervisory Clarity

The AI Act (Regulation 2024/1689) established EU-wide rules for AI systems. Its effectiveness, however, depends on national oversight mechanisms that are particularly complex and intertwined. This blog post examines a recent Polish legislative proposal that introduces a national oversight mechanism under the AI Act.

An article by Paweł Hajduk

National Implementation as a Stress Test

According to the AI Act, Member States must: 1) establish Market Surveillance Authorities (MSAs), which are essential for overseeing the AI Act at the national level, following the EU product safety legislation approach; 2) establish Notifying Authorities, which are vital for the conformity assessment process; and 3) designate Data Protection Authorities (DPAs) as supervisors of certain categories of high-risk AI systems involving, inter alia, biometric identification and migration, asylum and border control management (Art. 74(8) AI Act). These authorities must be independent and technically capable. Additionally, Member States must identify fundamental rights authorities (“national public authorities or bodies which supervise or enforce the respect of obligations under Union law protecting fundamental rights”), which have access to AI documentation and may request testing through the MSAs with respect to specific high-risk AI systems (Art. 77 AI Act). The AI Act grants national legislators discretion in designating the competent authorities at the national level, with a deadline of 2 August 2025. Member States’ approaches to this obligation vary significantly.

In June 2025, Poland unveiled an updated legislative proposal for the Act on AI Systems, establishing the Commission for the Development and Safety of Artificial Intelligence (in Polish: Komisja Rozwoju i Bezpieczeństwa Sztucznej Inteligencji; “KRiBSI” or “AI Commission”). The Commission is to act as the market surveillance authority and the single point of contact. Additionally, the Minister for Digital Affairs has been designated as the notifying authority, a role supported by the Polish Centre for Accreditation, which assists with the technical aspects of accrediting conformity assessment bodies. Chronologically, however, the first element of Poland’s AI oversight architecture was the designation of the national authorities protecting fundamental rights under Article 77 of the AI Act. The list currently includes the Polish DPA, the Patient Rights Ombudsman, the National Labour Inspectorate, and the Ombudsman for Children’s Rights. This step, however, was not without controversy, as explored in more detail below.

The following sections critically analyse the Polish AI Commission as the key competent authority under the Polish proposal. It should be noted that work on this legislation is ongoing, and amendments may render some of these remarks obsolete. Given the limited space of a blog post, the analysis remains at a general level.

Polish AI Commission: A Brand-New Collegial Authority

KRiBSI is designated as Poland’s primary MSA and single point of contact under Chapter 2, “Organisation of supervision over artificial intelligence” (Articles 5–29). It is a new collegial body with no predecessor in the Polish administration.

The AI Commission will consist of a chairperson, two deputy chairpersons, and four permanent members drawn from the Office of Competition and Consumer Protection (Urząd Ochrony Konkurencji i Konsumentów), the Financial Supervision Authority (Komisja Nadzoru Finansowego), the Office of Electronic Communications (Urząd Komunikacji Elektronicznej), and the National Broadcasting Council (Krajowa Rada Radiofonii i Telewizji). The chairperson is appointed by the lower chamber of the Polish parliament, with the consent of the upper chamber, for a five-year term and can be removed only ‘for cause’. The chairperson leads the AI Commission’s work, while the deputies are appointed by the chairperson following an open competition.

The AI Commission will decide by a simple majority of votes, with at least five members present; in the event of a tie, the chairperson has the casting vote. This is meant to promote balance and efficiency in decision-making, although the sectoral regulators represented on the AI Commission still hold considerable influence. Representatives of selected authorities, including the Data Protection Authority, the Patient Rights Ombudsman, the National Labour Inspectorate, and the Ombudsman for Children’s Rights, may attend meetings of the AI Commission in an advisory capacity. These bodies do not have voting rights.

According to the explanatory memorandum to the legislative draft (page 65), the proposal does not yet define the final shape of AI oversight in Poland. Further sectoral authorities may be added or assigned as MSAs once EU-level guidance on high-risk AI systems is published, which creates additional uncertainty.

Internal (In)dependence

One of the questions raised by Poland’s AI Act implementation proposal concerns the independence of the AI Commission. As stated above, it is composed of members drawn from existing regulatory authorities, a hybrid design that remains untested. This sectoral embedding may be presented as a strength because it leverages existing expertise and reduces the need to build oversight capacity from scratch. However, it also introduces a structural risk of institutional bias. First, the regulatory authorities represented on the AI Commission may themselves be subject to proceedings under the AI Act, particularly in their capacity as deployers of AI systems. Second, these authorities may pursue regulatory agendas and policy goals that are not fully aligned with the objectives of the AI Act. As a result, their representatives may be less inclined to support interpretations that could place additional constraints on the sectors within their original remit. Both risks may, in turn, affect the AI Commission’s ability to act impartially when addressing cross-sectoral issues.

Although Article 7 of the Polish draft provides for conflict-of-interest exclusions, these primarily cover decisions taken in individual administrative proceedings (and even there only partially), rather than addressing systemic concerns about divided loyalties.

The Sidelining of the Polish DPA and Unclear Cooperation

A structural concern is the limited role assigned to the Polish DPA within the Polish oversight architecture. Despite explicit provisions in the AI Act mandating the role of DPAs, Poland’s legislative proposal positions the DPA merely as a “cooperating authority”.

In its opinion of July 2025, the Polish DPA sharply criticised this limited role, arguing that its relegation to an advisory capacity without voting rights is inadequate. It emphasised that enforcing the AI Act’s data-related rules requires not merely cooperation, but meaningful participation in decision-making. It appears that without clear rules governing the Polish DPA’s involvement in these processes, cross-regime enforcement risks becoming ineffective.

The current draft does not include comprehensive rules on how cooperation between the DPA and the AI Commission is to work, whether in individual proceedings or in issuing soft law, including whether and when such cooperation would be obligatory and what consequences insufficient cooperation would have. Clearly defining the form, nature, scope, and method of cooperation would promote clarity, mitigate the problem of overlapping mandates, and minimise the risk of disputes over competence. This remark is pertinent not only to the Polish DPA but also to cooperation with the other bodies listed in the legislative proposal as “cooperating bodies”. These include, inter alia, cybersecurity authorities, relevant ministers, the Office for Registration of Medicinal Products, Medical Devices and Biocidal Products, the Patent Office, and the Polish Prosecutor General.

It is telling that the sidelining of the Polish DPA began even earlier. When Poland first designated its national authorities protecting fundamental rights under Article 77 of the AI Act, the Polish DPA was absent from the list (even though, in the initial draft of the Polish AI Act implementation, it had been designated as one of the members of the AI Commission). Its inclusion on the list came only later, in May 2025, after letters from the Polish DPA (the first and the second letters). This early omission can be seen as a harbinger of the ambiguity in the legislative approach towards the DPA’s role in overseeing the AI Act.

Controversial “Binding” Opinions

The Polish proposal introduces an intriguing instrument: binding interpretative opinions. These enable businesses to seek formal clarification on how the AI Act and its Polish implementing provisions apply to their specific use cases. Once issued by the AI Commission, an opinion becomes binding not only on the AI Commission itself but also on other Polish authorities.

Despite its appeal, the instrument raises serious concerns. Notably, the fact that these opinions appear binding on other public authorities, including the Polish DPA, raises questions about institutional autonomy. Without clear cooperation rules (as argued above), the binding effect of these opinions may de facto constrain the DPA’s (and other authorities’) decision-making in cases involving (personal) data aspects. This could, in effect, undermine the independence of the DPA. While the draft permits KRiBSI to amend or revoke an opinion in light of new guidance from EU-level bodies, it lacks a precise and unequivocal mechanism for reconciling potentially divergent interpretations.

What “binding” actually means is also problematic. For example, if KRiBSI issues a binding opinion on a system also used in other Member States, and another national authority or the European AI Office takes a divergent view, businesses relying on the opinion could face regulatory liability elsewhere. Moreover, because the AI Act does not regulate national binding opinions, and because such opinions of a Polish authority can indirectly influence how other authorities interpret the law, the instrument risks fragmenting EU enforcement and encouraging forum shopping.

Conclusion

Poland’s proposal for AI Act oversight is noteworthy for its innovative approach: a collegial body comprising key regulatory authorities across domains. However, it raises concerns, particularly regarding the independence of the AI Commission, its role vis-à-vis the Polish DPA, and procedural challenges that affect legal certainty. The question of technical and resource capacity is equally critical: recent reports highlight political uncertainty about KRiBSI’s ultimate staffing and budget. These concerns remain valid, as the latest legislative draft no longer proposes that fines fund the AI Commission’s budget; instead, the revenues will be allocated to the state budget. The original idea was innovative, although it risked creating a conflict of interest because it would have incentivised the AI Commission to favour fines over other remedies. Regardless of the final shape of the AI Act’s oversight architecture in Poland, it will undoubtedly be intriguing to follow how enforcement is carried out in such a setting.

Published under licence CC BY-NC-ND. 

  • Paweł Hajduk is an EU-qualified lawyer and a Ph.D. Researcher and Lecturer at the Department of Informatics Law at Cardinal Stefan Wyszyński University in Warsaw.
