As artificial intelligence (AI) continues to evolve, so do the challenges surrounding its development and security. One pressing concern is reverse engineering—the practice of deconstructing AI models to analyze their structure and functionality. While this technique can enhance transparency, improve interoperability, and expose biases, it also threatens intellectual property rights and trade secrets. As AI companies invest heavily in proprietary systems, the fine line between legal scrutiny and unfair competition becomes blurred. This article explores the legal and ethical implications of reverse engineering AI, questioning where we should draw the line in balancing innovation, accountability, and proprietary protections.
An article by Manasvi Madhumohan and Srividhya Perumal
Introduction
When OpenAI launched its ChatGPT model, experts wasted little time before trying to reverse-engineer it. Some of these efforts were justified by the need to understand the biases the model may have come with, while others sought ways of duplicating its capabilities.
Reverse engineering involves taking apart a finished product to study its components and design. Within the context of AI, it means analyzing an algorithm, a machine learning model, or the data it relies on to uncover its underlying structure and functionality. While it serves several purposes, it also brings significant risks.
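To make this concrete, the sketch below illustrates one widely discussed reverse-engineering technique, model extraction: an analyst repeatedly queries a black-box model and fits a local surrogate to the observed input-output pairs. This is a minimal, self-contained illustration, not a recipe; query_black_box is a hypothetical stand-in for any proprietary model behind an API.

```python
# Minimal sketch of "model extraction": approximating a black-box
# classifier by querying it and fitting a local surrogate model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def query_black_box(x: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for a proprietary model behind an API;
    # here it is a simple hidden decision rule the "analyst" cannot see.
    return (x[:, 0] + 0.5 * x[:, 1] > 1.0).astype(int)

rng = np.random.default_rng(0)
queries = rng.uniform(0, 2, size=(5000, 2))   # probe inputs
labels = query_black_box(queries)             # observed outputs

# The surrogate recovers an approximation of the hidden behavior
# without any access to the model's internals.
surrogate = LogisticRegression().fit(queries, labels)
print(f"surrogate agreement: {surrogate.score(queries, labels):.3f}")
```

Even this toy example shows why black-box access alone can leak much of a model's functionality, which is precisely what makes the legal questions below pressing.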
It raises questions about transparency and fairness: if we cannot understand how AI systems work, how do we hold them accountable for their decisions? And if reverse engineering is a way to achieve that understanding, where do we draw the line between legal scrutiny and intellectual property theft?
Trade secrets encompass confidential information such as formulas, processes, or business strategies that provide a competitive edge. For information to qualify as a trade secret, it must not be publicly known, it must derive value from its secrecy, and it must be safeguarded through reasonable measures. When using services like ChatGPT, users often agree to terms of use that permit companies to use the feedback shared in any way they choose, without any obligation to provide compensation. This creates a paradox: the very tools needed to maintain a competitive edge may also put that advantage at risk.
Legal Frameworks
At the international level, the TRIPS Agreement (Trade-Related Aspects of Intellectual Property Rights) under the World Trade Organization (WTO) provides a foundational standard for trade secret protection. Article 39 of TRIPS obligates member states to protect undisclosed information from unfair practices, including breaches of confidentiality and industrial espionage. While reverse engineering is not explicitly addressed, it is deemed lawful if conducted through honest practices.
In the European Union (EU), Directive 2016/943 provides a harmonized framework for trade secret protection, defining trade secrets as information that is secret, commercially valuable, and safeguarded by reasonable measures. Recital 16 permits reverse engineering of lawfully acquired products unless restricted by contract, provided the information is not used for unfair competition or direct replication. Separately, Directive 2009/24/EC allows reverse engineering of software in certain cases, particularly when it helps different systems work together (interoperability). For instance, if a company reverse-engineers a chatbot to ensure it functions with its own voice assistant, that could be legal. However, if the same process is used to copy the chatbot’s code and sell a competing product, it could breach competition law. In this way, reverse engineering cannot be exploited for unfair competition or for replicating the original software. Importantly, contractual terms cannot override the right to interoperability.
In the United States, the Uniform Trade Secrets Act (UTSA) allows reverse engineering as a legitimate method of discovery when conducted through fair and honest means. In Kewanee Oil Co. v. Bicron Corp., the Supreme Court affirmed that trade secret law does not protect against discovery by fair means such as reverse engineering, while Bonito Boats, Inc. v. Thunder Craft Boats, Inc. upheld the public’s right to reverse-engineer products in the public domain. However, End User License Agreements (EULAs), the legal contracts between software providers and users that govern how the software may be used, often restrict reverse engineering.
Germany takes a stricter stance, often classifying reverse engineering as an unfair commercial practice if it undermines fair competition, though it is permitted under the EU Trade Secrets Directive when conducted for legitimate purposes such as compatibility or innovation. Romanian law, reflecting EU standards, permits reverse engineering unless restricted by contractual agreements.
The TRIPS Agreement, the EU Directives, and the UTSA all lack clarity on AI systems. Unlike traditional software, AI systems are dynamic, adapting and evolving through training, which complicates their classification under conventional frameworks. For instance, while EU Directive 2009/24/EC permits reverse engineering for interoperability, it primarily targets static software programs, leaving uncertainty about its applicability to AI systems that combine software, training data, and continuously evolving algorithms. This ambiguity raises critical questions about whether reverse engineering AI systems is lawful under current legal structures.
In both the U.S. and the EU, EULAs frequently include clauses that prohibit reverse engineering. While such agreements cannot override statutory rights allowing reverse engineering for interoperability, they create a chilling effect: developers and researchers often avoid lawful reverse engineering for fear of litigation or penalties for breaching contractual terms. This is particularly problematic for AI systems, where reverse engineering may involve not only software decompilation but also the examination of datasets and model architectures. The situation is further complicated by the proprietary nature of AI models and training datasets, since reverse engineering these systems may require access to information that companies consider their core intellectual property.
Other countries such as India and China lack detailed legal frameworks specifically addressing reverse engineering of AI systems. This creates an uneven playing field for developers and companies operating globally.
Ethical Considerations
AI models often contain trade secrets developed through substantial research and investment. Reverse engineering to replicate these secrets undermines the legitimate interests of creators and may violate license agreements. While limited reverse engineering can promote competition and innovation, using it solely to free-ride on another entity’s R&D and replicate proprietary technology raises fairness issues and may conflict with legal protections of trade secrets.
Reverse engineering could serve to uncover biases, flaws, or unsafe behavior in proprietary models that affect the public. In such cases, some would argue there is a moral imperative to investigate their inner workings. However, reverse engineering that reveals private data raises ethical red flags around user privacy rights. Even when performed for research purposes, reverse engineering that inadvertently exposes personally identifiable information (PII) or proprietary datasets undermines trust and can cause tangible harm.
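The privacy concern has a concrete technical basis: so-called membership-inference techniques probe a model's behavior to guess whether a specific record was in its training data. The toy sketch below illustrates the basic intuition (an overfit model tends to be more confident on data it has seen); the model and data are synthetic stand-ins, and a real attack is considerably more involved.

```python
# Illustrative membership-inference heuristic: an overfit model is
# often far more confident on records it was trained on, so probing
# its confidence can leak whether a given record was in its data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
train_x = rng.normal(size=(200, 5))
train_y = (train_x.sum(axis=1) > 0).astype(int)
model = RandomForestClassifier(random_state=1).fit(train_x, train_y)

member = train_x[:1]              # a record the model has seen
non_member = np.zeros((1, 5))     # a borderline, unseen record
print(model.predict_proba(member).max())      # typically near 1.0
print(model.predict_proba(non_member).max())  # typically noticeably lower
```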
Academic researchers often conduct reverse engineering for educational or reproducibility purposes, which can have an ethical basis when there is no commercial intent. When the motivation shifts to commercial exploitation or undermining a competitor’s product, it raises more complex questions about the ethical boundaries of knowledge sharing.
Reverse engineering conducted with explicit approval (e.g., through security audits or partnership agreements) respects intellectual property rights and reinforces ethical integrity. When research uncovers flaws that may harm end users, responsibly sharing this information—ideally in coordination with the model owner—helps protect affected communities. However, if no public harm is at stake, researchers should limit disclosure of proprietary details to respect IP protections.
Implications for AI Developers and Researchers
AI developers and researchers face a difficult balancing act in today’s market as reverse-engineering techniques grow more powerful. The very technology they are helping to advance is eroding the old protections on which they rely.
First, developers must recognize that AI models, particularly black-box systems, are more vulnerable to reverse engineering than often anticipated. Modern machine learning’s capacity to discover patterns and recreate functionality makes proprietary algorithms harder to secure. Yet this also presents opportunities: reverse engineering can be used to ensure software interoperability, allowing AI systems to integrate easily with existing technology, as the sketch below illustrates.
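As an illustration of the interoperability case, here is a hypothetical adapter built after observing how a third-party model's API behaves. The endpoint URL, payload fields, and response format are all assumptions for illustration, not any real service's interface.

```python
# Hypothetical interoperability adapter: after observing (reverse
# engineering) the request/response shape a third-party model expects,
# a developer wraps it behind a stable interface of their own.
import json
import urllib.request

THIRD_PARTY_URL = "https://api.example-model.com/v1/complete"  # assumed

def ask_model(prompt: str, timeout: float = 10.0) -> str:
    # Payload shape inferred from observed traffic, not official docs.
    payload = json.dumps({"input": prompt, "max_tokens": 128}).encode()
    req = urllib.request.Request(
        THIRD_PARTY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        body = json.load(resp)
    return body["output"]  # field name inferred from observed responses

# A voice assistant, or any existing system, can now call ask_model()
# without depending on the third party's undocumented internals.
```

This is the kind of reverse engineering Directive 2009/24/EC contemplates: studying observable behavior to achieve compatibility, not copying the underlying product.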
Researchers, particularly those at academic institutions, face their own obstacles. The scientific principle of reproducibility frequently clashes with the commercial interest in preserving intellectual property, and the ease of reverse engineering with machine learning sharpens the tension between academic transparency and the protection of innovation. At the same time, reverse engineering provides vital insights into existing algorithms, enabling improvements and advances in AI technology.
The increasing simplicity and affordability of reverse engineering also carries economic consequences. Smaller AI organizations and startups face substantial obstacles, since they frequently lack the means to implement advanced security measures or to pursue legal action over intellectual property violations. The democratization of reverse engineering through machine learning disproportionately affects these smaller businesses, potentially impeding innovation and competition in the AI industry.
Solutions
Addressing the issue effectively requires, above all, the ability to adapt swiftly. This could involve re-evaluating the definition of “improper means” in trade-secret law or departing from existing legal frameworks where they no longer fit.
Further, patent law can be explored as an alternative to trade-secret protection, providing a time-limited monopoly in exchange for public disclosure of the invention. Additionally, companies can attempt to control reverse engineering through contractual terms, such as website terms of use or customer agreements.
While reverse engineering can undermine intellectual property protection, it can also act as a “safety valve” against the potential stifling effects of overly strong trade-secret protection, allowing independent development and encouraging further innovation.
Finally, transparency should be a top priority. Many current AI systems lack it, and addressing this opacity could help reduce the risks associated with reverse engineering.
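As one example of what proactive transparency can look like, a provider might publish how strongly each input feature drives a model's predictions, reducing the incentive for outsiders to probe the model themselves. The sketch below does this with scikit-learn's permutation importance on a synthetic stand-in model; the data and model are illustrative assumptions.

```python
# One concrete transparency measure: reporting how much each input
# feature drives a model's predictions via permutation importance.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (2 * X[:, 0] + X[:, 1] > 0).astype(int)   # features 2 and 3 are noise

model = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    # Published importances let outsiders audit the model without
    # needing to reverse-engineer its internals.
    print(f"feature {i}: importance {imp:.3f}")
```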
Conclusion
AI is both a tool for reverse engineering and a target of it, raising fundamental legal and ethical questions. Policymakers must ensure that regulations keep pace with innovation without stifling competition. Developers, in turn, must strike a balance between protecting their proprietary models and contributing to an open AI ecosystem. The challenge ahead is to create a legal framework that fosters both security and transparency in AI-driven industries.
Published under licence CC BY-NC-ND.