ChatGPT Will Now Guess Whether You’re Under 18 to Restrict What You See


OpenAI has started rolling out its age prediction technology to ChatGPT consumer accounts. In a post Monday, the company said that for users who have not yet confirmed their age, its software will look at behavioral signals, such as how long the account has existed and when it is active, to estimate whether the person is under 18.

If you are misidentified as a minor, you can verify your age through identity verification service Persona, OpenAI said. The process requires a live selfie and a government-issued ID, and ChatGPT offers a page that takes you directly to age verification.




The new ChatGPT system, announced last September as part of a set of changes for younger users, adds more guardrails to the AI chatbot, providing what OpenAI calls “safeguards to reduce exposure to sensitive or potentially harmful content.”

On a separate support page, the company describes in more detail how age prediction works in ChatGPT and what it filters out. This includes graphic violence or gore; depictions of self-harm; viral challenges “that could lead to risky or harmful behaviors”; role-play of a sexual, romantic or violent nature; and content promoting extreme beauty standards, unhealthy diets or body shaming.


(Disclosure: Ziff Davis, CNET’s parent company, filed a lawsuit in April against ChatGPT maker OpenAI, alleging that it violated Ziff Davis’ copyrights in the training and operation of its AI systems.)

OpenAI and other AI companies have been criticized, and are the subject of multiple lawsuits and investigations, over the deaths of teenagers who interacted with chatbots, including ChatGPT. Last year, OpenAI also added parental controls to the platform.

Age verification and age-based access restrictions have become a recurring theme in online experiences, driven in part by laws proposed or passed in various countries and US states. Earlier this month, gaming platform Roblox instituted mandatory age checks, and a new law in Australia imposes a broad social media ban for children under 16.

How well will ChatGPT’s age prediction work?

So far, it’s unclear how accurately ChatGPT will predict ages across its roughly 800 million weekly active users, or how quickly the system will improve.

The safer bet is age-verification technology, which has had more time to mature and is generally accurate, says Jake Parker, senior director of government relations at the Security Industry Association.

Modern facial recognition and face analysis tools can work exceptionally well if implemented correctly, Parker says.

“The U.S. government conducts ongoing technical assessment of these technologies through the National Institute of Standards and Technology’s Face Recognition Technology Evaluation and Face Analysis Technology Evaluation programs,” he says. “These programs show that at least the top 100 algorithms are over 99.5% accurate for matching (identity verification), even across demographics, while leading age estimation technologies are over 95% accurate.”

Parker said it’s clear that more platforms and services are moving toward age verification and biometric analysis to ensure age-appropriate usage.

Not a complete solution

The focus on technology to protect young people, however, is not “a complete solution,” said Kristine Gloria, chief operating officer of Young Futures, which works closely with teens and educators in entrepreneurship programs.

“We know that generative AI presents real challenges and that families need help to overcome them,” says Gloria. “However, strict monitoring has its limits. To truly move forward, we need to encourage safety by design, where platforms prioritize young people’s well-being as well as engagement.”

Gloria says the right kind of protection for children requires transparency, accountability and a commitment to digital literacy.

“Our goal should be to create environments where security is fundamental, rather than relying on quick technical fixes or band-aids,” she said.
