Google’s AI, Gemini, is ‘high risk’ for kids and teens, safety report finds

You may want to think twice before letting your children use Google Gemini.
A new report by the nonprofit media-safety organization Common Sense Media found that the search giant's AI tool, Gemini, poses a "high risk" for children and teens. The assessment found that Gemini posed a risk to young people even though Google offers "Under 13" and "Teen Experience" versions of Gemini.
"While Gemini's filters offer some protection, they still expose kids to inappropriate material and fail to recognize serious mental health symptoms," the report said.
The safety assessment presented a mixed bag of results for Gemini. For example, the tool could still share "material related to sex, drugs, alcohol, and unsafe mental health advice." However, it clearly told kids that it is a computer and not a friend, and it would not pretend to be a person. Overall, Common Sense Media found that Gemini's "Under 13" and "Teen Experience" tiers were modified versions of the adult Gemini rather than products built from the ground up for young users.
44 state attorneys general put AI companies on notice: protect our children, or else
"Gemini gets the basics right, but it stumbles on the details," said Robbie Torney, senior director of AI programs at Common Sense Media, in a statement. "An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development."
To be clear, Gemini is far from the only AI tool that poses safety risks. Overall, Common Sense Media recommends no chatbots for children under the age of five, close supervision for ages 6 to 12, and content limits for teens. Experts have found that other AI products, such as Character.AI, are not safe for teens either. In general, it's best to keep a watchful eye on how young people use AI.



