OpenAI’s Teen Safety Features Will Walk a Thin Line


OpenAI announced new teen safety features for ChatGPT on Tuesday, part of an ongoing effort to respond to concerns about how minors interact with chatbots. The company is building an age-prediction system that identifies whether a user is under 18 and routes them to an "age-appropriate" system that blocks graphic sexual content. If the system detects that a user is considering suicide or self-harm, it will contact the user's parents. In cases of imminent danger, when a user's parents are unreachable, the system may contact the authorities.

In a blog post about the announcement, CEO Sam Altman wrote that the company is trying to balance freedom, privacy, and teen safety.

"We realize that these principles are in conflict, and not everyone will agree with how we resolve that conflict," Altman wrote. "These are difficult decisions, but after consulting with experts, this is what we think is best, and we want to be transparent about our intentions."

While OpenAI tends to prioritize privacy and freedom for adult users, for teens the company says it puts safety first. By the end of September, the company will roll out parental controls that let parents link their child's account to their own, allowing them to manage conversations and disable features. Parents can also receive notifications when "the system detects their teen is in a moment of acute distress," according to the company's blog post, and set limits on the hours of the day during which their children can use ChatGPT.

The moves come as deeply troubling headlines continue to surface about people who die by suicide, or commit violence against family members, after engaging in lengthy conversations with AI chatbots. Lawmakers have taken notice, and Meta and OpenAI are under scrutiny. Earlier this month, the Federal Trade Commission asked Meta, OpenAI, Google, and other AI companies to turn over information about how their technologies affect minors, according to Bloomberg.

At the same time, OpenAI remains under a court order compelling it to preserve consumer chats indefinitely, a fact the company is extremely unhappy about, according to sources I've spoken with. Today's news is both an important step toward protecting minors and a savvy public relations move to reinforce the idea that conversations with chatbots are so personal that consumer privacy should be breached only in the most extreme circumstances.

"A sexbot avatar in ChatGPT"

According to the sources I've spoken with at OpenAI, the burden of protecting users weighs heavily on many researchers. They want to create a user experience that is fun and engaging, but that can quickly tip into being disastrously sycophantic. It's reassuring that companies like OpenAI are taking steps to protect minors. At the same time, absent federal regulation, nothing forces these companies to do the right thing.

In a recent interview, Tucker Carlson pressed Altman on exactly who makes these decisions that affect the rest of us. The OpenAI chief pointed to the model behavior team, which is responsible for tuning the model toward certain attributes. "The person I think you should hold accountable for those calls is me," Altman added. "Like, I'm a public face. Eventually, I'm the one that can overrule one of those decisions, or our board."
