Age Verification Is Reaching a Global Tipping Point. Is TikTok’s Strategy a Good Compromise?

Governments around the world are working to limit children’s access to social media as lawmakers question whether platforms are capable of enforcing their own minimum age requirements. TikTok recently became the latest tech giant to bow to regulatory pressure when it announced it would implement a new age detection system across Europe to prevent children under 13 from accessing the platform.
The system, which follows a year-long pilot in the UK aimed at proactively identifying and removing underage users, relies on a combination of profile data, content analysis and behavioral signals to assess whether an account likely belongs to a minor. (TikTok requires users to be at least 13 years old to register.) According to a company statement, the age detection system does not automatically ban users; it flags accounts it suspects are run by users under 13 and forwards them to human moderators for review. TikTok did not respond to a request for comment.
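TikTok has not disclosed how its detection model actually works, but the workflow described above — combine several signals into a likelihood score, then queue suspect accounts for human review rather than banning them automatically — can be illustrated with a minimal sketch. All signal names, weights and thresholds below are hypothetical, invented purely for illustration.

```python
from dataclasses import dataclass

# Hypothetical signal weights -- TikTok has not published its actual model.
WEIGHTS = {
    "profile_keywords": 0.4,   # e.g. a birth year or school grade in the bio
    "content_analysis": 0.35,  # classifier score on posted videos
    "behavioral": 0.25,        # engagement patterns typical of younger users
}
FLAG_THRESHOLD = 0.7  # assumed cutoff for routing to human moderators

@dataclass
class Account:
    account_id: str
    signals: dict  # signal name -> score in [0, 1]

def assess(account: Account) -> dict:
    """Combine per-signal scores into one likelihood and decide whether to
    queue the account for human review -- never an automatic ban."""
    score = sum(WEIGHTS[name] * account.signals.get(name, 0.0)
                for name in WEIGHTS)
    action = "queue_for_human_review" if score >= FLAG_THRESHOLD else "no_action"
    return {"account_id": account.account_id,
            "score": round(score, 3),
            "action": action}

suspect = Account("user123", {"profile_keywords": 0.9,
                              "content_analysis": 0.8,
                              "behavioral": 0.6})
print(assess(suspect))
```

The key design point, per TikTok’s statement, is the last step: a high score only routes the account to a moderator, leaving the final decision to a human.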
The European rollout comes amid a global debate over the negative effects of social media on children and as governments weigh stricter age-based regulatory approaches. Australia last year became the first country to ban social media for children under 16, covering platforms including Instagram, YouTube, Snapchat and TikTok. The European Parliament is also pushing for mandatory age limits, while Denmark and Malaysia are considering bans for children under 16.
“We are in the middle of an experiment in which American and Chinese tech giants have unlimited access to the attention of our children and young people for hours every day, almost entirely unsupervised,” Christel Schaldemose, a Danish lawmaker and vice president of the European Parliament, said in November during a parliamentary session that, according to Reuters, “called for an EU-wide ban on the access of children under 16 to online platforms, sites sharing videos and AI companions without parental consent and an outright ban for those under 13.”
Advocacy groups in Canada are also calling for the creation of a dedicated regulator to combat online harm affecting young people following a flood of sexualized deepfakes on X generated by its AI chatbot Grok. OpenAI recently announced that ChatGPT will use age prediction software to determine whether an account likely belongs to someone under 18 so that appropriate safeguards can be applied. In the United States, 25 states have adopted some form of age verification legislation.
“US legislatures, in the calendar year 2026 alone, are likely to pass dozens, if not hundreds, of new laws requiring online age authentication,” says Eric Goldman, a law professor and associate dean at Santa Clara University, who has argued that any “government-imposed censorship” should automatically be considered “constitutionally suspect.”
“Unless something radically changes,” Goldman says, “regulators around the world are building a legal infrastructure that will require most websites and apps to be age-authenticated.”
As platforms move to properly handle age verification, does TikTok’s strategy of monitoring users instead of banning children outright seem like a good compromise? It depends on how you feel about digital surveillance.
“This is a fancy way of saying that TikTok will monitor the activities of its users and make inferences about them,” says Goldman. Because platform governance is often tied to political motivations and policy solutions sometimes expose children to more harm than help, Goldman calls age verification mandates “laws of segregation and repression.”
“Users are likely not happy with this additional monitoring, and any false positives, such as incorrectly identifying an adult as a child, will have potentially major consequences for the misidentified user,” Goldman says. He adds that while this approach may suit TikTok, most services don’t have enough data on their users to reliably guess people’s ages, so it isn’t really scalable to other platforms.