Leveling Up Regulation: When the CHAT Act Enters the Game

This article analyses the proposed US “Children Harmed by AI Technology Act” (CHAT Act), highlighting that its broad definition of “companion AI chatbots” could unintentionally regulate AI-powered video game characters. It argues that while protecting minors from harmful AI interactions is vital, overly expansive regulation may impose costly compliance burdens, stifle creative innovation, and misclassify comparatively harmless entertainment products as emotional companions.

An article by Tarmio Frei, Ingemar Kartheuser, and Greta Sparzynski

In Washington, few issues generate bipartisan urgency like protecting children online. After headlines about teens forming unhealthy bonds with “AI companions” and tragic suicide cases linked to unfiltered chatbots, US lawmakers moved to act. Alongside the AWARE and GUARD Acts, one prominent proposal is the Children Harmed by AI Technology Act (CHAT Act) – a measure designed to shield minors from sexually explicit or harmful AI systems. 

The intent is commendable. The danger lies in its reach. By defining a “companion AI chatbot” as any software-based artificial intelligence system or program that exists for the primary purpose of simulating interpersonal or emotional interaction, friendship, companionship, or therapeutic communication with a user, the bill could sweep far wider than the Replika-style chatbots that inspired it. 

The language is so expansive that it could classify the talking party member in a role-playing game or Fortnite’s AI-powered Darth Vader as a regulated “companion”. This could expose many video game publishers to costly compliance duties designed for a very different kind of risk.

The CHAT Act 

Under the bill, every “covered entity” that makes available a companion AI chatbot to people in the US must comply with strict rules. Among them: 

  • Account creation and verification. Each user must create an account before accessing any chatbot functions. 
  • Age verification. Every account must undergo an age verification process using a “commercially available method or process that is reasonably designed to ensure accuracy”.
  • Confidentiality of age verification data. Companies must protect the confidentiality of collected data, limiting its use to what is strictly necessary for age verification, parental consent, or compliance records. 
  • Parental oversight. If a user is under 18 years, the account must link to a verified parent or guardian who must grant consent before the child can interact with the AI.
  • Content monitoring. Developers must detect “suicidal ideation”, provide contact information for the National Suicide Prevention Lifeline, and block minors from chatbots engaging in “sexually explicit communication”. 
  • Transparency. Every 60 minutes the system must remind users they are talking to a machine, not a human. 

Each of these requirements may seem reasonable for explicit or emotionally immersive chatbots – though some debate whether the legislative approach is too sweeping, particularly around age verification and potential First Amendment implications. In a gaming context, these obligations could introduce unnecessary complexity and cost. 
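
To make the scale of these duties concrete for a game studio, the sketch below shows – in deliberately simplified, hypothetical form – just two of them wired into a dialogue loop: the hourly AI disclosure and a crude screen for suicidal ideation. The names, keyword list, and notice wording are our own illustrative assumptions rather than anything the bill prescribes, and a production system would need far more than this.

```python
import re
import time

DISCLOSURE_INTERVAL_SECONDS = 60 * 60  # the bill's "every 60 minutes" reminder
LIFELINE_NOTICE = "If you are in crisis, call or text 988 to reach the Suicide & Crisis Lifeline."
SELF_HARM_PATTERNS = [r"\bkill myself\b", r"\bend my life\b", r"\bsuicide\b"]  # illustrative only


class ChatCompliance:
    """Tracks the hourly AI disclosure and screens messages for self-harm language."""

    def __init__(self) -> None:
        self.last_disclosure = float("-inf")

    def disclosure_due(self) -> bool:
        """True when the 'you are talking to a machine' reminder is due again."""
        now = time.monotonic()
        if now - self.last_disclosure >= DISCLOSURE_INTERVAL_SECONDS:
            self.last_disclosure = now
            return True
        return False

    def flags_self_harm(self, message: str) -> bool:
        """Naive keyword screen; real detection would need far more than regexes."""
        return any(re.search(p, message, re.IGNORECASE) for p in SELF_HARM_PATTERNS)


def respond(compliance: ChatCompliance, player_message: str, npc_reply: str) -> str:
    """Wrap an NPC reply with whatever notices the statute would require."""
    notices = []
    if compliance.disclosure_due():
        notices.append("Reminder: you are chatting with an AI character, not a person.")
    if compliance.flags_self_harm(player_message):
        notices.append(LIFELINE_NOTICE)
    return "\n".join(notices + [npc_reply])
```

Even this toy version hints at the operational footprint: per-user state to persist, notices to localize, and edge cases – sarcasm, song lyrics, in-character roleplay – that a keyword screen cannot handle. And it ignores the heavier duties entirely, from age verification to parental-consent records.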

Where “Interpersonal Interaction” Goes Too Far 

The problem begins with the term “interpersonal interaction”, which is defined neither in the CHAT Act nor in any other relevant federal statute. Since the wording is broader than the other listed categories – emotional interaction, friendship, companionship, or therapeutic communication – it could become the decisive factor in determining coverage.

Interpersonal interaction is commonly understood as verbal or non-verbal exchange between two or more individuals. By that logic, any AI system primarily designed to communicate naturally with users could be seen as simulating interpersonal interaction.

Such a reading could easily encompass AI-powered video game characters. Many games use AI to make fictional worlds and non-player characters (NPCs) responsive and believable. Innovative features, like Fortnite’s AI-powered Darth Vader, show how large language model (LLM)–based NPCs can let players interact and role-play with familiar characters in dynamic and entertaining ways. Research indicates that LLMs make NPCs “more engaging and realistic”, reduce “repetitive discourse”, and create “a more explorative experience within the game”. Fortnite’s Darth Vader, for instance, not only assists players on the virtual battlefield, but also engages in real-time voice chats – commenting on in-game events, such as the player’s outfit, game status, or nearby loot, and even discussing Star Wars–related topics. Similarly, LLM-based AI systems can be used as commentators or narrators, retelling in-game events much like sports broadcasters describing a football match.

While a few AI-powered game characters may resemble chatbots like Replika or Character.AI – consider, for example, the AI boyfriends in the Chinese 3D otome game Love and Deepspace – most do not. Instead, they employ AI in tightly defined contexts, limited to specific roles, activities, or storylines within the game. Their purpose is not to form emotional bonds or build lasting human–AI relationships, as would be typical for artificial companions. In many cases, they aren’t even capable of doing so, whether because of technical guardrails or because they retain no memory of prior interactions, as in the case of Fortnite’s Darth Vader.
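
The difference is easiest to see in schematic form. The hypothetical sketch below pins a character to a narrow in-game role and passes in no history from earlier sessions; the prompt wording and the stand-in backend are our own assumptions and do not reflect any studio’s actual implementation.

```python
# Hypothetical illustration of a role-limited, stateless NPC; the prompt text and
# the stand-in backend are assumptions, not any studio's production code.

NPC_SYSTEM_PROMPT = (
    "You are Darth Vader inside a battle-royale match. Talk only about the current "
    "match, the player's loadout, nearby loot, and Star Wars lore. Refuse to act as "
    "a friend, therapist, or romantic partner, and do not discuss the player's real life."
)


def generate_reply(system_prompt: str, player_message: str) -> str:
    """Stand-in for whatever LLM backend a studio might use; canned so the sketch runs."""
    return "Impressive loadout. The high ground favors you – for now."


def npc_turn(player_message: str) -> str:
    # Stateless by design: nothing from earlier sessions is passed in, so the
    # character cannot accumulate the long-term "relationship" companion apps rely on.
    return generate_reply(NPC_SYSTEM_PROMPT, player_message)
```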

That does not mean AI-powered NPCs are entirely risk-free. Players have managed to circumvent some of Darth Vader’s safeguards, leading him “to say some questionable things” – from swearing at players to awkward responses to inquiries about “mommy ASMR”. Still, the main risks associated with AI NPCs involve privacy, occasional toxicity or biased output, and inaccurate references to in-game events. These differ fundamentally from the dangers the CHAT Act targets: (early) exposure to sexually explicit content or encouragement of self-harm. Extending the law to cover LLM-based NPCs or in-game narrators simply because they simulate interpersonal interaction would be unjustified. For precisely this reason, California’s SB 243, for instance, exempts most AI-powered video game characters from its AI companion definition.

There is, however, room for a narrower interpretation. Courts could recognize that the legislative intent behind the CHAT Act was to address systems capable of forming deep, long-term emotional or social bonds or generating sexually explicit material, and not every narrative or entertainment AI in video games. The Federal Trade Commission could also issue clarifying guidance on how to interpret “interpersonal interaction”. Yet, as written, the statute leaves open serious doubt as to whether it distinguishes a fantasy sidekick from a therapeutic or romantic chatbot.

Compliance Nightmares for Game Studios 

If the CHAT Act becomes law and the broad interpretation prevails, it would impose obligations that only larger video game studios could manage. Age-verification systems require sensitive data, such as IDs or facial scans – information that must be stored, secured, and potentially audited. That introduces privacy and cybersecurity risks alongside significant financial costs.

A coalition of technology and consumer-policy groups warned the Senate that such verification schemes “would endanger the privacy and data security of children and families nationwide”. They pointed to recent breaches at identity-verification providers and to the United Kingdom’s experience with mass age-checks, which drove users to adopt VPNs to evade the rules. 

Larger publishers might absorb these expenses. Smaller developers likely cannot. Indie studios – short for independent video game studios – often lack the infrastructure or financial resources to implement robust verification systems or manage personal data responsibly. Requiring them to track user ages, maintain parental-consent records, and monitor chat logs for specific content could easily exceed their technical and financial capacity.

That imbalance risks entrenching incumbents. Heavy compliance burdens tend to consolidate industries around large firms while squeezing out smaller innovators – the very developers advancing creative storytelling and experimentation. 

Beyond US Borders – A Global Ripple Effect

The CHAT Act’s reach would not stop at the US border. Its (extra)territorial scope covers any company that “makes available” a covered chatbot to users in the United States, regardless of where that company is based. This could include publishers from Japan, China, or the EU. 

For multinational publishers, compliance might mean creating US-specific versions of games that disable or limit AI-driven dialogue or implementing costly compliance systems. For smaller studios, the risk of enforcement or the cost of compliance could make the US market effectively off-limits for AI chatbot-driven video games. 

Toward a Smarter Approach

The challenge before lawmakers is to protect minors from harmful AI companion interactions without stifling creativity, compromising privacy, or imposing heavy compliance burdens on low-risk uses of AI. Treating every conversational AI system as a seductive chat partner misinterprets both the technology and its social function. Narrower drafting could achieve that balance.

For video game applications, California’s SB 243 offers a promising model. The bill excludes certain bots that are “a feature of a video game and […] limited to replies related to the video game that cannot discuss topics related to mental health, self-harm, sexually explicit conduct, or maintain a dialogue on other topics unrelated to the video game.”

Adopting a similar approach at the federal level would help avoid regulatory overreach into creative industries and prevent strict and costly compliance obligations for contexts where the risks of psychological harm or exposure to sexually explicit content are limited. Additionally, it would ensure that systems that are capable of exposing users to sexually explicit material, promoting self-harm, or fostering maladaptive long-term emotional attachment remain subject to appropriate safeguards.

In practice, such a carve-out would keep (at least relatively) harmless, gameplay-focused AI-powered video game characters, such as Fortnite’s Darth Vader, outside the scope of heavy regulation, while ensuring that genuinely risky systems – such as in-game AI boyfriends or girlfriends fostering long-term human–AI relationships with players – are covered by clear protective standards.
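
As a rough illustration of how such a carve-out might translate into an engineering constraint, the hypothetical sketch below confines a character’s replies to game topics and declines the restricted subjects SB 243 names. The topic lists, wording, and stand-in backend are our own assumptions, not a legal test.

```python
# Illustrative only: an SB 243-style reply policy for an in-game character.
# Topic lists and wording are assumptions, not a legal standard.

RESTRICTED_TOPICS = ("mental health", "self-harm", "suicide", "sexually explicit")
GAME_TOPICS = ("quest", "loot", "boss", "map", "weapon", "match", "loadout")


def game_scoped_reply(player_message: str) -> str:
    """Stand-in for a game-scoped LLM backend; canned so the sketch runs."""
    return "The loot cache is just past the ridge. Follow the marker on your map."


def npc_reply(player_message: str) -> str:
    msg = player_message.lower()
    if any(term in msg for term in RESTRICTED_TOPICS):
        # The carve-out requires that the bot cannot maintain a dialogue on these topics.
        return "I can only talk about the game. Where were we on that quest?"
    if not any(term in msg for term in GAME_TOPICS):
        # Off-topic but not restricted: steer the conversation back to the game.
        return "Let's stick to the match. Need help finding better gear?"
    return game_scoped_reply(player_message)
```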

If the United States wants to lead responsibly on AI governance, it must lead with precision and proportionality. Not every AI that talks back is a companion – and not every companion is a danger.

Published under licence CC BY-NC-ND. 

  • Tarmio Frei is a Research Assistant and doctoral student at Bucerius Law School in Hamburg, Germany, where he works with Prof. Dr. Linda Kuschel on law and digitalization. His dissertation focuses on the regulation of anthropomorphic conversational agents under EU law, with a particular emphasis on AI companions. His broader research interests include technology regulation, data protection, games law, and the legal dimensions of open-source hardware.

  • Dr. Ingemar Kartheuser, LL.M. is a technology and data privacy lawyer located in Hamburg and Frankfurt. He focuses on outsourcings, tech projects, data privacy and cyber security matters. He advises on large-scale projects in the technology area and has special interest in the healthcare sector where he has advised a number of multi-national clients. He is also an expert in e-commerce and IT issues in M&A transactions (e.g. carve-outs).

  • Greta Sparzynski is a Research Assistant and doctoral student at the Center for Transnational IP, Media and Technology Law and Policy at Bucerius Law School. Her dissertation examines copyright law from an interdisciplinary perspective, combining law, art, and art history. Her broader research interests include intellectual property regulation, AI law, games law, and digitalization, as well as open-source hardware.
