ChatGPT Is Getting on the AI Age Verification Bandwagon


When OpenAI first announced GPT-5.2 last month, it quietly disclosed a new security feature called “age prediction.” Since ChatGPT isn’t exactly an “all ages” tool, it makes sense that users under 18 get safeguards against harmful content. The company says that users who indicate they are under 18 already get a modified experience to “reduce exposure to sensitive or potentially harmful content,” but if a user does not voluntarily share their age with OpenAI, how does the company enforce these protections? This is where age prediction comes into play.

How Age Prediction Works for ChatGPT

On Tuesday, OpenAI officially announced its new age prediction policy, which, like the age verification system used by Roblox, uses AI to guess a user’s age. If the system decides that a particular user is under 18, OpenAI will adjust the experience accordingly, aiming to keep all interactions age-appropriate.

Here’s how it works: The new age prediction model looks at both user behaviors within the app and general account data. This includes things like account age, times of day the user accesses ChatGPT, usage patterns, and, of course, the age the user said they are. By weighing all of this data, the model determines the user’s likely age. If the model thinks they are over 18, they will get the full experience; if the model thinks they are under 18, they will receive a “safer experience.” If the model isn’t confident either way, it defaults to this safer experience.

What is restricted in the “safer” version of ChatGPT

In this limited experience, ChatGPT will attempt to reduce the following types of content for anyone the model thinks is under 18:

  • Graphic violence or gore

  • Viral challenges that could inspire “risky or harmful behavior”

  • Role play of a sexual, romantic or violent nature

  • Descriptions of self-harm

  • Content promoting “extreme” beauty standards, unhealthy diet, or body shaming

The company says its approach draws on “expert input” as well as literature covering the science of child development. (It is unclear how much of this input comes from direct interviews and coordination with experts, and how much, if any, comes from independent research.) The company also acknowledges “adolescents’ known differences in risk perception, impulse control, peer influence, and emotional regulation” compared to adults.

AI is not always good at predicting age

The biggest risk with any of these age prediction models is that they sometimes get it wrong: confidently making mistakes is an unfortunate habit all AI models share. That cuts both ways: you don’t want someone too young accessing inappropriate content in ChatGPT, but you also don’t want an adult stuck with a limited account for no reason. If you end up in the latter situation, OpenAI has a solution for you: direct age verification via Persona. That’s the same third party Roblox uses for its age verification, which hasn’t gone very well so far.


This doesn’t necessarily spell disaster for OpenAI. Roblox tried to overhaul age verification for a massive user base accustomed to a certain type of multiplayer experience, and the rollout blocked users from chatting with others outside their newly assigned age categories, which were often incorrect. ChatGPT’s age prediction, by contrast, only affects the experience of one user at a time. If the prediction model alone isn’t enough, OpenAI will let you upload a selfie as an additional verification step. Interestingly, OpenAI says nothing about the option to upload an ID for verification, which other companies, such as Google, offer.

I’m not necessarily a fan of age prediction models, as I think they often sacrifice user privacy in the name of creating age-appropriate experiences. But there is no doubt that OpenAI must do something to limit the full ChatGPT experience for younger users. Many ChatGPT users are under the age of 18, and some of the content they encounter is wildly inappropriate, from instructions on how to get high to advice on writing suicide notes. In some tragic cases, minors have died by suicide after chats with ChatGPT, leading to legal action against OpenAI.

I don’t have any good answers here. We’ll just have to see how this new age prediction model affects the user experience for minors and adults, and whether it actually succeeds in creating a safer experience for younger, more impressionable users.
