The Appleton Times

Truth. Honesty. Innovation.

Technology

Age verification is a mess but we’re doing it anyway

By Emily Chen

about 6 hours ago


Global laws mandating online age verification for child safety are driving platforms to adopt flawed technologies that compromise user privacy, from AI inference to face scans and app-store checks. Experts propose cryptographic alternatives like zero-knowledge proofs, but current systems remain imperfect amid legal challenges and fragmented regulations.

In recent years, governments around the world have ramped up efforts to protect children online, leading to a wave of laws requiring age verification on digital platforms. From the United Kingdom and Australia to France, Brazil, and several U.S. states, these regulations aim to block minors from accessing pornography, harmful social media, or other inappropriate content. Yet, as platforms like Meta, Google, and Discord implement these systems, experts warn that the available technologies come with significant privacy risks and technical shortcomings, forcing users into uncomfortable tradeoffs.

The rapid adoption of age-gating measures has transformed the internet landscape. What began as conceptual discussions just a few years ago is now standard practice on major sites, with laws spreading globally. In the U.S., states such as Texas, Louisiana, California, Tennessee, Florida, and Virginia have enacted varying requirements, some targeting platforms directly and others shifting the burden to app stores. Internationally, similar mandates have taken hold, but the methods for verifying age remain fraught with issues, according to privacy advocates and tech executives.

One common approach is age inference, where artificial intelligence analyzes existing user data to estimate age without requiring new information. For instance, Meta employs an AI system on Instagram to detect teens and apply stricter account settings. Google and YouTube scan accounts for signs of users under 18, while Discord plans to introduce its own inference tool later this year. These systems consider factors like account creation date—if a user joined Instagram 18 years ago, they are likely an adult—or more nuanced signals, such as video search patterns on YouTube or birthday mentions in posts on Instagram.

Cobun Zweifel-Keegan, managing director at the International Association of Privacy Professionals, explained the appeal of this method in an interview with The Verge. “You don’t need to know who someone is in order to figure out their age, so that’s why, in theory, age inference technologies can be less privacy invasive,” Zweifel-Keegan said. Discord has emphasized that most users won't face additional checks thanks to its AI, highlighting the goal of minimizing data collection.

However, age inference isn't foolproof and often falls short of regulatory standards. When the AI can't confidently determine a user's age—or mistakenly flags an adult as a minor—platforms resort to more invasive verification. This leads to the next layer of challenges: collecting personal data to confirm adulthood.

Government-issued photo IDs offer high accuracy but pose severe risks if breached, as has occurred in multiple incidents. To avoid building their own systems, companies turn to third-party providers like k-ID, Persona, and Yoti, which handle verification and share some liability. Still, these services store sensitive details, creating ongoing security concerns for users.

Face scanning has emerged as a popular alternative, estimating age from a user's photo without needing legal documents. This can even occur on the device itself, preventing data from being sent to servers. But the Electronic Frontier Foundation has criticized these tools for inaccuracies, particularly affecting people of color and women, and for vulnerabilities like using images from video games—such as the character Sam Porter Bridges from Death Stranding—to bypass checks.

Rick Song, CEO of Persona, acknowledged the limitations of on-device processing during a discussion with The Verge. “If we could run [on-device] effectively on a phone, I would do it in a heartbeat, but it’s not currently feasible,” Song said. He noted that server-side verification is more secure against tampering, even though it involves transmitting data. Additionally, older devices, like pre-iPhone X models or legacy Android phones common in many regions, lack the power to run advanced AI, forcing users to upload IDs instead and exacerbating privacy divides.

Lawmakers have increasingly looked to device manufacturers and app stores to streamline age checks. Proposals supported by companies including Meta, Spotify, Match, and Garmin advocate for a single verification point at the app store level, such as Apple's iOS App Store or Google's Play Store. Under California's Digital Age Assurance Act, set to take effect in 2027, operating systems like Windows, macOS, and Linux must prompt users for birth dates during setup and relay age-range signals to apps.
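The flow California's law describes can be sketched in a few lines: the operating system collects a birth date once at setup and shares only a coarse age-range signal with apps. Note that the specific brackets and labels below are illustrative assumptions, not the statute's definitions.

```python
from datetime import date

# Illustrative age brackets; the statute's actual ranges may differ.
BRACKETS = [(0, 12, "under-13"), (13, 15, "13-15"),
            (16, 17, "16-17"), (18, 200, "18-plus")]

def age_range_signal(birth_date: date, today: date) -> str:
    """Map a birth date to a coarse bracket; apps never see the date itself."""
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day))
    for low, high, label in BRACKETS:
        if low <= age <= high:
            return label
    raise ValueError("age out of range")

print(age_range_signal(date(2010, 6, 15), date(2026, 1, 1)))  # "13-15"
```

The privacy argument for this design is that the birth date is collected once, by the OS, and apps receive only the bracket, never the underlying date.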

This approach complicates matters for open-source systems. Privacy-focused GrapheneOS, a variant of Android, has stated it will not implement mandatory age verification, even if it means devices can't be sold in certain regions. Developers of Linux distributions are still navigating these rules, with uncertainty over whether they apply to repositories like APT or Pacman. Meanwhile, states like Texas and Louisiana mirror California's app-store focus, while Tennessee, Florida, and Virginia target platforms directly.

These fragmented laws face legal hurdles, particularly under the U.S. First Amendment. Federal courts have blocked several measures on free speech grounds. Zweifel-Keegan pointed out the tension: “It’s not difficult for companies to estimate or verify ages using various technologies. They have all sorts of different tools in their tool belts, but it is difficult for the [US] government to require it. Once the government starts saying you have to do it, it actually starts to be subject to First Amendment scrutiny. And so far, we haven’t seen that survive particularly well.”

In response to these challenges, some platforms like Discord and Roblox have voluntarily adopted verification systems beyond legal requirements, accepting the tradeoffs. Globally, the patchwork of rules—varying by country and even U.S. state—complicates compliance for multinational companies.

Privacy experts are exploring alternatives to reduce data exposure. Zero-knowledge proofs (ZKP), a cryptographic technique, allow users to prove they are over 18 without revealing specifics. France's data privacy agency demonstrated this in 2022, where government issuers provide age proofs storable in digital wallets for use across sites. Google supports ZKP development, though researchers at Brave caution that improper implementation may not truly preserve privacy, since repeated verifications can progressively narrow a user's age range—for example, a user who proves they are between 20 and 22, and later that they are 21 or older, has narrowed their possible age to a two-year window.
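The narrowing risk Brave's researchers warn about is plain interval arithmetic, and can be modeled without any actual cryptography. The sketch below tracks only the intervals a verifier learns from each successful proof; it is not a zero-knowledge proof implementation.

```python
# Each successful proof reveals only "age is in [lo, hi]", but a verifier
# (or colluding verifiers) can intersect the leaked ranges over time.

def intersect(a, b):
    """Intersection of two closed intervals, or None if they are disjoint."""
    lo = max(a[0], b[0])
    hi = min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

# The article's example: prove "between 20 and 22", then "21 or over".
known = (0, 150)                      # verifier's prior knowledge
for proof in [(20, 22), (21, 150)]:
    known = intersect(known, proof)

print(known)  # (21, 22): two proofs have narrowed 150 years to two.
```

This is why implementation details matter: a proof system that is zero-knowledge for a single query can still leak information across correlated queries.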

The European Union is advancing an open-source age verification app, described by Commission President Ursula von der Leyen as “technically ready.” Users would upload IDs, passports, or verify via banks or schools against an EU-approved list, generating traceless proofs for restricted platforms. ZKP remains an experimental feature for now.

Other innovations include creating stable cryptographic keys from initial checks for reuse, as suggested by the Future of Privacy Forum. Daniel Hales, a policy counsel with the FPF, told The Verge that such systems could pair with browser-stored credentials to cut down on repeated verifications and prevent credential sharing on devices. “This can reduce the amount of age checks, but it can also mitigate the risk of shared devices or a certain credential for one person being shared among multiple people,” Hales said.
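One way to sketch the FPF idea is key derivation: after a single successful age check, derive a stable credential key that a browser could store and re-present, bound to a device secret so copying it elsewhere is useless. This is a minimal illustration of the concept, not an FPF or browser specification; every name and format below is an assumption.

```python
import hashlib
import hmac
import os

def derive_credential(verifier_result: bytes, device_secret: bytes) -> bytes:
    """HMAC-based derivation: a stable key bound to both the verification
    outcome and this device, so the credential cannot be shared to another
    device without the device secret."""
    salt = hashlib.sha256(device_secret).digest()
    return hmac.new(salt, verifier_result, hashlib.sha256).digest()

device_secret = os.urandom(32)               # e.g. held in a secure enclave
result = b"over-18|issuer=example-verifier"  # attested outcome, not the DOB

cred1 = derive_credential(result, device_secret)
cred2 = derive_credential(result, device_secret)
print(cred1 == cred2)   # stable: same inputs always yield the same key
```

Because the derivation is deterministic, the user never repeats the underlying age check, yet a credential lifted onto a second device produces a different key and fails, which addresses the sharing risk Hales describes.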

Despite these promising concepts, they remain in development, leaving current systems as imperfect stopgaps. As of 2026, the internet is increasingly segmented by age gates, but the balance between child safety and user privacy eludes policymakers and tech firms alike. Hales emphasized the need for careful consideration: companies and lawmakers must navigate “the balancing act of privacy and safety.” Until better solutions emerge, users worldwide face heightened surveillance risks in the name of protection, with ongoing legal battles and technological experiments shaping the path forward.
