Online Age Assurance and Identity Checks: Between Verification and Vulnerability

As lawmakers in the UK, EU, Australia, the U.S. and beyond move toward mandatory age checks for online services, platforms increasingly rely on invasive verification methods involving government IDs and biometric data. The recent Discord–Persona incident illustrates how quickly “teen safety” tools can expose users to novel privacy and security risks. Against this backdrop, privacy-preserving architectures such as Zero-Knowledge-Proof methods, local device-based verification, and the emerging EUDI-Wallet are becoming increasingly central to online age assurance – raising key questions under the GDPR.

Since the UK’s Online Safety Act 2023 came into force, age assurance for social media and other online content has been the subject of controversial debate in many jurisdictions. Australia, France, certain U.S. states and other countries have passed similar legislation requiring social media platforms to verify users’ ages before granting access to their content. The U.S. federal government and the European Union are debating further laws and regulations, including updates and guidance on existing acts, such as the European Commission’s guidelines on the protection of minors under the Digital Services Act (DSA).

These regulations are driven by child protection as well as a perceived lack of accountability online. Reliable age assurance methods, however, often require users to verify their identity with government-issued IDs and facial scans. Such identity verification constitutes processing of sensitive personal data and therefore triggers requirements under privacy laws such as the EU General Data Protection Regulation (GDPR).

Discord’s Age-Verification Ambition

Discord illustrates this issue. In Australia and the UK, laws already required the platform to verify user ages. In early 2026, Discord rolled out mandatory age verification globally, anticipating similar requirements elsewhere. To do so, it relied on Persona, an identity-verification startup backed by Peter Thiel’s Founders Fund, which already provided ID- and facial-scan-based age checks for companies such as OpenAI, Lime, and Roblox.

This rollout was contentious from the start. To access age-restricted communities, users were asked to undergo facial scans and submit scans of their IDs. Persona would then use the collected data to verify the identity, and thereby the age, of the user. The situation escalated dramatically when security researchers uncovered nearly 2,500 internal Persona files exposed on a U.S. government-authorized server. These files revealed that Persona’s systems went far beyond simple age and identity checks: they contained not only IDs and photographs for verification purposes, but also more advanced components such as facial-recognition models, watchlist-screening mechanisms and checks for politically exposed persons. Soon after, hacktivists claimed to have breached Persona’s systems and gained insight into how biometric data was processed. Their findings reinforced the view that the company’s data practices were broader than publicly disclosed and that Persona’s front-end security was insufficient.

These revelations triggered a wave of criticism. Privacy advocates questioned why a teen-safety tool appeared to contain components associated with surveillance and financial intelligence. Discord, facing mounting pressure, ended its relationship with Persona soon after.

GDPR Requirements on Age Verification Processes

  • Age verification involves processing personal data, often of a sensitive nature, and must therefore comply with data privacy laws. In the EU, the core principles under Art. 5 GDPR apply: lawfulness (requiring a valid legal basis, e.g. explicit consent for biometric data under Art. 9 GDPR), purpose limitation (data may only be used for the specified verification purpose), data minimization (only data strictly necessary may be collected), storage limitation, and integrity and confidentiality (appropriate technical and organizational safeguards must be in place). 
  • Where a platform engages a third-party verification provider, such as Persona, to process identity and biometric data on its behalf, but that provider also processes the data for its own purposes (e.g. fraud detection, model training, or service improvement), does the provider act as an independent controller, or as a joint controller within the meaning of Art. 26 GDPR, and what are the practical consequences of that classification for contractual structuring and liability allocation?
  • To what extent must online platforms document the rationale behind their choice of verification method in order to satisfy the accountability principle under Art. 5(2) GDPR, and does the failure to evaluate less data-intensive alternatives itself constitute a compliance deficit?

Practical Implementation and EUDI-Wallet

Age verification presents a structural tension: effective verification (particularly against fraudulent identity documents) tends to require more data, while data protection law demands that collection be kept to the minimum necessary. Some verification methods carry less compliance risk than others – notably Zero-Knowledge-Proof-based cryptographic attestation schemes operated by trusted third parties (allowing age verification without disclosure of further attributes) or device-based verification mechanisms that minimize data transmission.

A particularly significant example of a Zero-Knowledge-Proof-based approach is the forthcoming European Digital Identity Wallet (EUDI-Wallet), an application that allows users to prove age attributes without revealing the underlying personal data. With the EUDI-Wallet, an issuer (e.g. a government authority) provides a digital, cryptographically signed certificate (e.g. a digital ID) to a wallet holder. When the wallet holder wishes to prove certain information (e.g. being of age) to a service provider (e.g. a social media platform), the service provider sends a cryptographically signed request to the wallet holder. This request can cover as little or as much information as necessary (e.g. a request for proof of majority returns only yes/no, not the age or the date of birth) and is matched to the wallet only by a one-time identifier, so the wallet holder can remain anonymous. In addition, requests are logged only locally in the wallet, meaning the issuer does not learn with whom the wallet holder shared their information.
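The selective-disclosure flow described above can be sketched in a few lines of Python. This is an illustrative toy, not the actual EUDI-Wallet protocol: HMAC stands in for real digital signatures, the function names (`issue_credential`, `make_request`, `respond`) are hypothetical, and a real implementation would rely on certified cryptographic attestations. What the sketch shows is the data-minimization property: the verifier receives only a yes/no answer bound to a one-time identifier, never the date of birth itself.

```python
import hmac, hashlib, secrets
from datetime import date

ISSUER_KEY = secrets.token_bytes(32)  # stand-in for the issuer's signing key

def sign(key: bytes, message: bytes) -> bytes:
    # HMAC is used here as a simplified stand-in for a real digital signature.
    return hmac.new(key, message, hashlib.sha256).digest()

def issue_credential(date_of_birth: date) -> dict:
    # Issuer (e.g. a government authority) signs the attribute once.
    payload = date_of_birth.isoformat().encode()
    return {"dob": date_of_birth, "signature": sign(ISSUER_KEY, payload)}

def make_request() -> dict:
    # Service provider asks only "is the holder of age?", bound to a one-time identifier.
    return {"claim": "age_over_18", "nonce": secrets.token_hex(16)}

def respond(credential: dict, request: dict, today: date) -> dict:
    # Wallet verifies the issuer's signature locally, then answers only yes/no.
    payload = credential["dob"].isoformat().encode()
    assert hmac.compare_digest(credential["signature"], sign(ISSUER_KEY, payload))
    dob = credential["dob"]
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    # The response echoes the one-time nonce but discloses no underlying attribute.
    return {"claim": request["claim"], "nonce": request["nonce"], "answer": age >= 18}

cred = issue_credential(date(2004, 5, 1))
resp = respond(cred, make_request(), today=date(2026, 1, 15))
print(resp["answer"])  # True: of age, without disclosing the date of birth
```

Note that the date of birth never leaves the wallet-side `respond` function; the verifier only ever sees the boolean answer and its own nonce.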

The EUDI-Wallet represents a paradigm shift: away from centralized verification services (such as Persona) toward user-centered, self-managed identity. Only the user holds all information, decides what to share, and knows with whom. With the EUDI-Wallet, the EU positions itself as a global pioneer, putting pressure on less privacy-friendly alternatives. Self-managed identity may become the standard in the EU, with other jurisdictions potentially following toward a more user-centric architecture.

Published under licence CC BY-NC-ND. 

  • Julius Remmers is a lawyer at Taylor Wessing in Germany and a certified specialist in information technology law. He advises clients on data protection law, IT and technology contracts, e‑commerce, and digital transformation. He holds a Master’s degree in Innovation, Technology and the Law from the University of Edinburgh and a PhD in media and IT contract law from Leibniz University Hannover.

  • Philipp Heussel works as a legal research assistant at Taylor Wessing Germany in the Technology, Media and Telecommunications team. He holds an LL.B. from Bucerius Law School and recently completed his First State Examination in Law. Outside of work, his interests focus on privacy, cybersecurity, and international AI regulation, as highlighted by his previous research on mental health chatbots.
