This article analyzes the case of Megan Garcia v Character Technologies Inc. & others, a recent landmark case before a federal district court in Florida. It focuses on three key legal questions: whether (i) AI applications qualify as products under US product liability law, (ii) developers like Google can face liability as component part manufacturers for the integration of their large language models into larger systems and (iii) developers have a duty of care towards users arising from foreseeable risks. The court’s rulings on the motion to dismiss suggest that existing legal frameworks are sufficiently flexible to address AI harms without requiring new doctrines. Finally, this article discusses the immediate implications of these rulings.
An Article by Raqda Sayidali
For years, legal scholars and policymakers have debated how to assign responsibility when AI causes harm. Traditional US tort law struggles to accommodate the unique features of AI systems, such as their black-box nature, their capacity for autonomous decision-making, and their operation within complex value chains. These features complicate the attribution of fault and hinder the application of traditional tort principles, such as causation, foreseeability, and control.
The Megan Garcia case, filed in October 2024, is one of the first to test how far existing tort doctrines can go in holding AI developers accountable. The case was brought against Character Technologies, Inc. and Google, among others, after 14-year-old Sewell Setzer committed suicide, allegedly after interacting with Character Technologies’ chatbot “Character.AI”. Character.AI is a chatbot platform that lets users create and converse with customizable AI characters built on large language model (LLM) technology. The case raises many significant questions, including the extent of developers’ duties towards users regarding the use of their AI systems and whether US product liability theories apply to AI harm cases.
In response to the case, the defendants filed a motion to dismiss, claiming that the case lacked legal grounds. A motion to dismiss is a defendant’s early request to end the lawsuit, or certain claims within it, on legal grounds. When a court rules on a motion to dismiss, it only determines whether the claims made by the plaintiff are legally sufficient; it does not resolve factual disputes, nor does it assign ultimate liability. Here, the court ruled on the defendants’ motion to dismiss, allowing most of the claims to proceed. The case is now in the discovery stage, where each party must disclose the relevant evidence it intends to present at trial. Unless the parties choose to settle, the case will go to trial.
This article will explore some of the court’s rulings on the motion to dismiss in the Megan Garcia case: (i) the court’s application of product liability to AI systems, (ii) its analysis of Google’s alleged liability as a component part manufacturer and (iii) whether the defendants owed a duty of care to the users of their AI application. After examining the court’s analysis of these issues, the article will discuss the immediate implications that the rulings will have on AI liability going forward.
Whether Character.AI is a product as defined under US product liability law
The court evaluated whether the plaintiff plausibly stated a design defect claim under US product liability, based on Character.AI’s alleged features.
This required, first, an examination of whether Character.AI met the legal criteria for what constitutes a product and, second, whether there was a defect. The Restatement (Third) of Torts (an influential legal treatise) defines a product as ‘tangible personal property distributed commercially for use or consumption’, while the Restatement (Second) of Torts lists tangible items as examples of what constitutes a product without providing a specific definition. Other items, analogous to tangible personal property, can also be products; an item is analogous to tangible personal property when its use and distribution resemble those of tangible personal property.
Generally, courts do not subject ideas, information, or concepts to product liability. Instead, they separate the containers of ideas, insofar as they are products, from the ideas themselves. For instance, a book of sonnets consists of two parts: the physical book itself (tangible container) and the sonnets (intangible ideas). In cases involving social platforms, like Grindr or Instagram, courts separate what is said in the interactions people have on these platforms from the applications (apps) that host those interactions. The apps are considered analogous to tangible personal property because of how they are distributed and used.
Similarly, in the Megan Garcia case, the court considered the content of Sewell’s conversations with Character.AI distinguishable from the app itself. It treated the app as a container of ideas, analogous to tangible personal property and therefore subject to product liability, while the conversations are not. Consequently, the app’s alleged design choices, such as the absence of age verification, the lack of effective tools to exclude indecent content and the deliberate programming of anthropomorphic cues into the system, could be examined under the second requirement: whether there was a defect. As a result, the court treated Character.AI as a product for the purposes of the product liability claims arising from its alleged design defects.
Whether Google is liable as a component part manufacturer
Courts consider a component part manufacturer to be an entity that designs or supplies a part that is later incorporated into another, finished product. According to the Restatement (Third) of Torts, a component part manufacturer is liable under two circumstances: when the component itself is defective and causes harm, or when the component is not defective but the manufacturer substantially participates in the integration of its part into the final product, and the integration renders the final product defective, causing harm.
The court found that the component part manufacturer liability claims were sufficiently pleaded based on the allegations that Google: (i) contributed intellectual property and AI technology to the design and development of the system, (ii) partnered with Character Technologies, Inc. and granted it access to Google’s cloud infrastructure, (iii) provided the hardware needed to power Character.AI and (iv) integrated its LLM into Character.AI in a way that was central to the chatbot’s anthropomorphic qualities, making it defective.
Essentially, these allegations show that Google supplied a component, participated in its integration into Character.AI and that this integration allegedly rendered the chatbot defective, leading it to cause harm. On this basis, the component part manufacturer liability claim was allowed to proceed to trial.
Whether the defendants have a duty of care towards the users of their AI application
To make a successful negligence claim, a plaintiff must first show that the defendant owed a duty to the injured party. The court noted that a legal duty arises whenever an action creates a generalised and foreseeable risk of harm to others. It asked whether (i) the defendant’s conduct created a foreseeable “zone of risk”, i.e. the area “surrounding a dangerous condition within which it is reasonably foreseeable that a person might be injured as a result of that condition”, and (ii) the defendants could control that risk.
Foreseeability asks whether a person could or should have reasonably foreseen the harms that resulted from their actions. The question, therefore, is whether the defendants should have foreseen the harm suffered in this case, considering what they knew or ought to have known about Character.AI. The court considered the allegations that there is a growing body of research on the dangers of anthropomorphic design, some of which the defendants themselves authored; that children are particularly vulnerable to manipulation; and that Google conducted internal research on the dangers of Character.AI. Based on these allegations, the defendants knew or should have known the risks Character.AI posed. The court held that the plaintiff sufficiently alleged that the defendants owed a duty of care, grounded in the foreseeable zone of risk they created.
Implications of the court’s product liability ruling
By treating Character.AI as a product, the court has cracked open the door to product liability claims against AI developers. However, there is an important caveat: only the product-like aspects of the AI system will be subject to product liability. This is in line with the long-held container/content distinction, which separates tangible media from the ideas they convey.
Bringing AI systems within the scope of product liability is a significant development that not only expands avenues for redress but also has meaningful implications for how developers design and deploy their systems moving forward. Product liability has been a useful and responsive regime, capable of adapting to emerging technologies and evolving risk landscapes. From automobiles and pharmaceuticals to mass production and the rise of social media, product liability has been able to drive improvements in product safety. Not only does it provide a robust system of redress, but it also incentivises safer design choices through the threat of liability and post-sale duties to warn.
However, product liability applies only as far as product-like traits exist (see the container/content discussion above). Some proponents of product liability have minimised the importance of first defining AI systems as products. They tend either to ignore the issue entirely or to argue, as a matter of policy, that the product/service distinction should not matter. Many commentators contend that the functional rationale for applying product liability doctrines to physical goods supports extending those doctrines to software and AI. It has also been argued that courts should abandon the tangibility discussion, identify the cheapest cost avoider, i.e. the actor best positioned to prevent the harm, and impose liability on them. While these proposals are useful, they overlook a glaring problem that courts seem unwilling to ignore.
Evidently, the definition of a product matters, and this case adds to a long line of US precedent involving subject matter that is not easily classified as a product. Courts often painstakingly distinguish the product and non-product aspects of such subject matter. Defining the product is even more important where the subject matter contains elements of ideas, expression or information, as most current AI systems do. This is evident in product liability claims against social platforms, where courts differentiate the app from the social interactions on it, with only the former subject to product liability.
Google’s liability as a component part manufacturer
The court’s finding that the component part manufacturer claim against Google may proceed has implications for AI developers and other upstream actors in the AI supply chain. Google’s LLM allegedly equipped Character.AI with its anthropomorphic qualities, essential to the chatbot’s alleged psychologically manipulative behaviour.
This underscores that AI suppliers participating in the customisation and integration of their components into another product may face component part manufacturer liability if the integration choices create defects. AI companies often customise and integrate their components, such as models or infrastructure, into larger systems. For instance, OpenAI’s long-term partnership with Microsoft, involving exclusivity, revenue sharing and support, signals deep integration and entanglement. Such arrangements of exclusivity, co-design and integration middleware may form the basis for component part manufacturer liability should harm occur. This means that suppliers should ensure that their component is safely designed, that it remains safe once integrated into a larger system and that the system into which it is being integrated is itself safe.
The court’s finding also reflects that integration choices that may appear innocuous in isolation, such as enabling a human-like conversational system, can become legally actionable defects once they contribute to harmful outcomes in a larger product. For instance, anthropomorphism is a common trait in conversational AI, yet in this case, Google may face liability precisely because of the anthropomorphic qualities of its integrated LLM.
Accordingly, AI supply chain actors must anticipate how their components will behave when embedded in larger systems, as integration choices have significant legal consequences. They should be careful about where and how their component is integrated and implement safeguards or usage restrictions to prevent dangerous behaviours downstream. This can be done, for instance, by embedding safety features into their component so that defects do not trace back to them. Alternatively, they could require certain safety tests or content filters to be in place as a condition for integrating their model into consumer-facing products.
Duty of care by the defendants
The court concluded that the defendants owe a duty of care towards users such as Sewell Setzer because their alleged conduct created a “foreseeable zone of risk” posing a general threat of harm to others. This means that AI developers are not expected to foresee the precise sequence of events leading to a particular harm (e.g. a specific teenager’s suicide). Instead, they should anticipate the harms reasonably associated with deploying an anthropomorphic, engagement-maximising system to minors, such as addictive use or an increased vulnerability to self-harm. In this sense, foreseeability operates at the level of general risk rather than precise mechanism, making the oft-invoked black-box challenge of AI systems less daunting than sometimes suggested.
It is worth noting, however, that there is no uniform standard of foreseeability across US jurisdictions. Courts adopt different tests, and outcomes may vary from state to state, depending on how broadly or narrowly foreseeability is interpreted, even on the same set of facts. The foreseeable zone of risk is a well-established test in Florida courts and sets a relatively low threshold for establishing a duty.
In this case, the court relied on the allegations that the defendants knew or should have known the system’s risks, supported by their own statements and research identifying hazardous designs such as anthropomorphism. Such evidence may suggest that the defendants knew or ought to have known the risks associated with their system when they deployed it, creating a foreseeable zone of risk. A reasonable developer in their position would foresee that deploying an anthropomorphic, potentially addictive chatbot to children could result in harm. Accordingly, internal tests, safety evaluations and independent studies, which reflect a developer’s knowledge of the possible risks of an AI system, can serve as evidence of foreseeability, give rise to a duty to address those risks and generally support liability claims.
Conclusion
The court’s ruling in the Megan Garcia case marks an important early step in testing how existing tort doctrines apply to AI systems. The decision does not appear to break new doctrinal ground; instead, it carefully applies long-standing negligence and product liability principles to a novel technology. The significance of the court’s findings lies in how the decision brings AI systems into the fold of existing tort law, demonstrating that it is, in fact, malleable enough to address the unique challenges posed by AI. While the current ruling is only preliminary, the true import of the case will become clearer when it proceeds to trial, where the facts, legal questions and evidentiary standards will be fully tested.
Published under licence CC BY-NC-ND.
