“Vibe coding” – when AI generates entire software systems from natural language prompts – is the latest trend in tech. But as the EU’s new Cyber Resilience Act (CRA) rolls out, a question arises: can this law keep pace with AI’s rapid, often unpredictable, code generation?
An article by Carolin Kemper
What is vibe coding?
There’s a new kind of coding I call “vibe coding”,
where you fully give in to the vibes, embrace exponentials,
and forget that the code even exists.
Andrej Karpathy on X.com, 03.02.2025
Vibe coding refers to writing software with Large Language Models (LLMs) on the basis of prompts: developers describe the desired software in plain natural language, leaving the AI to handle everything from architecture to debugging. Tools like GitHub Copilot already assist coders by suggesting lines of code, but vibe coding takes this further. Vibe coders have no insight into the actual code and cannot vouch for its quality; if they have questions, they ask their coding “agent” to explain the code. While tech companies hail it as a cost-saver, critics warn that it invites security risks.
The Security Risks of Vibe Coding
Vibe coding introduces severe security risks: Generative AI is known to produce false but plausible-sounding information – often referred to as “hallucinations” – due to its “autocomplete-like” mode of operation. This becomes dangerous when plausible but non-existent software packages and libraries (i.e. other software resources) are (repeatedly!) referenced in the generated code – and malicious actors then register real packages under these hallucinated names (“slopsquatting”). Vibe coders might unknowingly add malware to their apps, as manually checking every AI-suggested package defeats the point of automation.
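To illustrate the kind of diligence this requires, consider a minimal, hypothetical sketch in Python that checks AI-suggested dependency names against the public PyPI JSON API (the package names are invented examples). Note that mere existence proves little: a slopsquatted package exists precisely because an attacker registered the hallucinated name, so a reviewer would additionally have to check maintainers, release history and download patterns.

# Minimal sketch: verify that AI-suggested package names exist on PyPI.
# The package list is a hypothetical example of coding-agent output.
import requests  # third-party HTTP library

SUGGESTED_PACKAGES = ["requests", "flask-jwt-simple-auth", "numpy"]

def exists_on_pypi(name: str) -> bool:
    """Query the public PyPI JSON API for a package name."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

for pkg in SUGGESTED_PACKAGES:
    if not exists_on_pypi(pkg):
        print(f"'{pkg}': not on PyPI - likely a hallucinated dependency")
    else:
        # Existence alone does not rule out slopsquatting: an attacker may
        # have registered the hallucinated name. Review the maintainer,
        # release history and download numbers before installing.
        print(f"'{pkg}': exists - manual vetting still required")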
Additional security issues exist. Since AI coding assistants are trained on data from open-source repositories like GitHub, their models might embed widespread but insecure coding practices and outdated patterns (especially deprecated, i.e. obsolete, libraries). Furthermore, AI coding agents often generate new code instead of reusing or refactoring existing code, leading to “code bloat” and technical debt.
For now, vibe coding is used primarily for hobby projects, so security issues would (mainly) affect the vibe coders themselves. Nevertheless, it is only a matter of time before vibe code seeps into widely distributed applications. Software developers and companies might, for example, turn to vibe coding for quick and cost-saving results.
The Legal Risks of Vibe-Coded Products
The inherent security risks of vibe coding may expose organizations to legal consequences. As regulatory frameworks evolve, software quality assurance is no longer just a best practice – it is increasingly becoming a legal obligation. The Cyber Resilience Act (EU) 2024/2847 (CRA) stands at the forefront of this regulatory shift, establishing comprehensive cybersecurity requirements for software-based products entering the EU market (hobby projects for personal use remain exempt).
The CRA stipulates requirements for “products with digital elements”, which includes software, i.e. “software or hardware product[s] and [their] remote data processing solutions, including software or hardware components being placed on the market separately” (Art. 3 (1) CRA). As a consequence, any software-based product is governed by the CRA (unless it is a cloud-based service, Recital 11 CRA) and must be “designed, developed and produced in accordance with the essential cybersecurity requirements” (Art. 6 (a), Art. 13 (1) and Annex I Part I CRA).
Ensuring “an appropriate level of cybersecurity” is the basic requirement (Annex I Part I (1) CRA). Whether the level of cybersecurity is appropriate with regard to the risks of the product is to be determined in a risk assessment process (Art. 13 (2), (3) CRA). This process must be “documented and updated” (Art. 13 (3) CRA). Such a risk assessment must take into account the “health and safety of users” (Art. 13 (2) CRA). It should also comprise an analysis of risks based “on the intended purpose and reasonably foreseeable use, as well as the conditions of use” and “the operational environment or the assets to be protected” (Art. 13 (3) CRA).
AI-generated risk assessment?
Vibe coding enthusiasts might wonder whether it is possible to prompt an AI agent to carry out a risk assessment – including recommendations towards managing the risks posed by the product. Meta is planning to use AI for automating risk assessments. The next logical step after vibe coding is vibe compliance.
Assessing risks entails more than checking boxes. It involves understanding potential threats, estimating resulting damages, comparing them to preventive measures, balancing expected implementation costs, and eventually choosing an appropriate level of protection (cf. Bambauer/Hurwitz/Thaw/Tschider, Cybersecurity: An Interdisciplinary Problem, 2021, at 85 ff.). It constitutes the basis for responsibility and accountability (as well as potentially liability).
With the Model Context Protocol (MCP) introduced by Anthropic, it is possible to supply local data as context to code-generating LLMs. This would allow manufacturers to feed contextual data on business operations into their risk assessments, thereby enriching them (as sketched below). Manufacturers could thus generate risk assessments, feed the results back into code development, and have the coding agent produce the documentation.
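What such an integration might look like in practice: a minimal, hypothetical sketch using the FastMCP helper from the official MCP Python SDK, exposing invented operational context of the kind that a risk assessment under Art. 13 (3) CRA would draw on. The server name, resource URI and business data are illustrative assumptions, not a compliance recipe.

# Hypothetical sketch: exposing local business context to a coding agent
# via the Model Context Protocol (official Python SDK, FastMCP helper).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("risk-context")  # server name is an invented example

@mcp.resource("context://operational-environment")
def operational_environment() -> str:
    """Illustrative description of the operational environment that
    Art. 13 (3) CRA requires a risk assessment to take into account."""
    return (
        "Product: customer-facing web portal; "
        "assets to be protected: personal data, payment tokens; "
        "exposure: public internet"
    )

if __name__ == "__main__":
    mcp.run()  # serve the context to a connected coding agent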
The CRA does not explicitly demand manual intellectual involvement. Rather, the relevant criterion is the outcome: either damage occurs, or noncompliance is detected by market surveillance authorities, e.g. while exercising their powers to investigate product compliance (Art. 14 of Regulation (EU) 2019/1020). As long as compliance with cybersecurity requirements is convincingly feigned – with working code, a plausible risk assessment and proper documentation – inappropriate cybersecurity will hardly attract attention before damage occurs. In particular, the mandatory conformity assessment (Art. 32 CRA) may not have a great impact on compliance.
Conformity Assessment – The Toothless Tiger?
Most conformity assessments will be undertaken by the manufacturer in an internal control procedure (Art. 32 (1)(a) CRA), especially if manufacturers apply harmonised standards, common specifications or European cybersecurity certification schemes (see Art. 32 (2) CRA). One could try to prompt AI coding agents to generate software in compliance with these norms. But compliance is not guaranteed.
Subsequently, the necessary technical documentation (Art. 31 CRA) could also be generated. After all, AI agents are capable of explaining their own code. Vibe-coding manufacturers could ask their AI agent to generate a document containing “all relevant data or details of the means used by the manufacturer to ensure that the product with digital elements and the processes put in place by the manufacturer comply with the essential cybersecurity requirements”, as set out in Art. 31 (1) CRA.
The bar is raised if vibe code is contained in a product of a higher risk category, i.e. important or critical products (cf. Art. 7 and Art. 8 CRA respectively). The conformity assessment procedures will be stricter and may require full quality assurance (Annex VIII, module H) – raising the question of whether notified bodies will be able to detect vibe-coded technical documentation.
Automated vulnerability handling?
The more specific requirements of the CRA demand more than vibe coding can guarantee: whether it is delivering the product “without known exploitable vulnerabilities” (Annex I Part I (2)(a) CRA), protecting the confidentiality, integrity and availability of data and essential functions (Annex I Part I (2)(e), (f), (h) CRA), or minimizing attack surfaces (Annex I Part I (2)(j) CRA), vibe code that has a penchant for “hallucinations” and tends to be bloated will hardly achieve this level of security. To ameliorate these issues, vibe coders could prompt their coding agent to “self-debug” – but this is unreliable and often tedious. Automated testing in general is limited in scope and can only test against predefined standards and benchmarks. And the market is riddled with security tools that overpromise all-encompassing security but deliver only specific functionalities. Moreover, automated testing tools produce reports to prove the efficacy of their security processes, contributing to the “security theatre” of visible, somewhat performative, security checks – irrespective of their accuracy or effectiveness. Automation cannot replace human oversight and control: the highly limited testing capabilities of AI complement, but do not supersede, expert review.
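The gap between automated checks and genuine assurance can be seen in dependency auditing. The following minimal sketch uses the open-source pip-audit scanner (assumed to be installed) to flag known vulnerabilities in pinned dependencies – by design, such tools find only what advisory databases already record, and nothing about novel flaws in the AI-generated application logic itself.

# Minimal sketch: automated scan for *known* vulnerabilities in pinned
# dependencies using the open-source pip-audit tool (assumed installed).
import subprocess

result = subprocess.run(
    ["pip-audit", "--requirement", "requirements.txt"],
    capture_output=True,
    text=True,
)
print(result.stdout)

# pip-audit exits non-zero when known vulnerabilities are found; this
# says nothing about unknown flaws in the application code itself.
if result.returncode != 0:
    print("Known vulnerable dependencies found - expert review needed.")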
Cybersecurity Regulation at a crossroads
The CRA theoretically draws a line for vibe coding. AI-assisted software development is not precluded as long as manufacturers comply with their obligations, inter alia to conduct a risk assessment and implement cybersecurity requirements. The Achilles’ heel of vibe-coded software will be the detection and handling of vulnerabilities: Here, expert review is necessary for reliable software quality assurance. The problem with vibe coding is that it incentivizes overreliance on the coding agents’ output – and thorough code review and quality assurance may be omitted in an effort to save time and costs.
The CRA is not hostile to AI-based software development. In fact, it prescribes tools that could help tackle the threat of slopsquatting: it requires manufacturers to identify and document the components contained in products with digital elements, including by “drawing up a software bill of materials” (SBOM, Annex I Part II (1) CRA), i.e. a formal record of the components included in the software of a product with digital elements (Art. 3 (39) CRA). This can be done automatically, but rigorous (manual) verification of the legitimacy of software packages is still necessary to spot malicious third-party software.
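For illustration, here is a minimal sketch of the kind of machine-readable record an SBOM contains, loosely following the CycloneDX format (one common SBOM standard). Real products would use a dedicated generator rather than hand-rolling this, and the component listed is a hypothetical example.

# Minimal, illustrative SBOM fragment loosely following the CycloneDX
# standard; the listed component is a hypothetical example.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "library",
            "name": "requests",                  # component name
            "version": "2.31.0",                 # exact pinned version
            "purl": "pkg:pypi/requests@2.31.0",  # package URL identifier
        }
    ],
}

print(json.dumps(sbom, indent=2))
# Generating this record is easy to automate; verifying that each listed
# package is legitimate (and not slopsquatted) is not.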
The regulatory impact of the CRA will most likely be limited in practice: none of the essential cybersecurity requirements matter if they are not rigorously checked and enforced. Technically, it is possible to automate the entire product life cycle – from designing and coding software to risk assessment, quality assurance and vulnerability handling – albeit with error-prone, “hit or miss” software as a result. The problem is: only in the event of actual damage will the software’s flaws become apparent and trigger liability (cf. the new Product Liability Directive (EU) 2024/2853). As a consequence, liability will be the primary remedy against insecure vibe code in software products, whereas the regulatory requirements lack strict preventive controls and checking mechanisms.
Conclusion
While the CRA encourages the development of secure software – and should dissuade software developers from vibe coding – it does not unambiguously interfere with the “surrender to the vibes” approach. Even though vibe code will most likely not meet the CRA’s requirements, it may successfully feign compliance. It remains to be seen whether the CRA will be effective in fostering a higher level of cybersecurity in this dawning era of AI-generated software – or whether the quality and security of software will take a turn for the worse.
The author expresses her sincere gratitude to Michael Kolain and Sebastian Raible as well as Gabriel Udoh.
Published under licence CC BY-NC-ND.