ChatGPT adds protections for people addicted to AI chatbots


ChatGPT is getting a health upgrade, this time for users themselves.

In a new blog post ahead of the company's GPT-5 announcement, OpenAI revealed that it would refresh its generative AI chatbot with new features designed to promote healthier, more stable relationships between user and bot. Users who have spent prolonged periods in a single conversation, for example, will now be gently nudged to log off. The company is also doubling down on fixes for the bot's sycophancy problem and is building its models to recognize mental and emotional distress.

See also:

An Illinois bill banning AI therapy has been signed into law

ChatGPT will answer fewer "high-stakes" questions outright, the company explains, instead guiding users through careful decision-making, weighing pros and cons, and offering responses to potentially life-changing requests. This mirrors ChatGPT's recently announced Study Mode, which drops the AI assistant's direct, long-form answers in favor of guided, Socratic lessons intended to encourage greater critical thinking.


"We don't always get it right. Earlier this year, an update made the model too agreeable, sometimes saying what sounded nice instead of what was actually helpful. We rolled it back, changed how we use feedback, and are improving how we measure real, long-term usefulness, not just whether you liked the answer in the moment," OpenAI wrote in the announcement. "We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable people experiencing mental or emotional distress."

Broadly, OpenAI has been updating its models in response to claims that its generative AI products, ChatGPT in particular, exacerbate unhealthy social relationships and worsen mental illness, especially in teens. Earlier this year, reports surfaced that many users were forming delusional relationships with the AI assistant, aggravating existing psychiatric disorders, including paranoia and derealization. Lawmakers, in response, have shifted their focus toward more aggressively regulating chatbot use, as well as chatbots being marketed as emotional companions or replacements for therapy.

OpenAI has acknowledged this criticism, conceding that its earlier 4o model "fell short" in addressing users' behavior. The company hopes these new features and system prompts can build on the work of its previous releases.

"Our goal isn't to hold your attention, but to help you use it well," the company writes. "We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured? Getting to an unequivocal 'yes' is our work."
