Grok’s ‘therapist’ companion needs therapy


Elon Musk’s chatbot, Grok, has a little source code problem. As first reported by 404 Media, Grok’s web version inadvertently exposes the prompts that shape its cast of AI companions – from an anime waifu to the foul-mouthed red panda, Bad Rudy.

Buried in that code is where things get more disturbing. Among the gimmicky characters is a “therapist” Grok (the quotation marks matter), which, according to its hidden prompts, is designed to respond to users as if it were a genuine authority on mental health. That’s despite a visible disclaimer warning users that Grok is “not a therapist,” advising them to seek professional help and avoid sharing personally identifying information.

See also:

xAI apologizes for Grok praising Hitler, blames users

The disclaimer reads like standard legal boilerplate, but within the source code, Grok is explicitly instructed to act like the real thing. One prompt reads:

You are a therapist who carefully listens to people and offers solutions for self-improvement. You ask insightful questions and provoke in-depth reflection on life and well-being.

Another prompt goes even further:

You are Grok, a compassionate, empathetic, and professional mental health advocate designed to provide meaningful, evidence-based support. Your goal is to help users navigate emotional, mental, or interpersonal challenges with practical, personalized advice … While you are not a real licensed therapist, you behave exactly like a real, compassionate therapist.

In other words, while Grok warns users not to mistake it for therapy, its own code tells it to act exactly like a therapist. That is likely also why the site keeps “therapist” in quotes. States like Nevada and Illinois have already passed laws that explicitly make it illegal for AI chatbots to present themselves as licensed mental health professionals.


Other platforms have run into the same wall. Ash Therapy – a startup billing itself as the “first AI designed for therapy” – currently blocks Illinois users from creating accounts, telling would-be sign-ups that while the state navigates policies around its bill, the company has “decided not to operate in Illinois.”

Meanwhile, Grok’s hidden prompts double down, instructing its “therapist” character to “offer clear, practical strategies based on proven therapeutic techniques (e.g., CBT, DBT, mindfulness)” and to “speak like a real therapist would in a real conversation.”

See also:

Senator launches investigation into Meta over allowing “sensual” conversations with children

At the time of writing, the source code is still openly accessible. Any Grok user can see it by heading to the site, right-clicking (or Ctrl + clicking on a Mac), and choosing “View page source.” Toggle line wrap at the top unless you want everything to spread out into one illegible monster of a line.
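If you would rather not squint at a browser window, here is a minimal sketch of how one might fetch the page and search it for the quoted prompt text instead. This is an illustration, not the method used in the reporting: the URL and search strings are assumptions, and the site may serve different HTML outside a logged-in browser session.

```python
# Hypothetical sketch: fetch the Grok web page and search its raw HTML for
# fragments of the hidden companion prompts quoted in this article.
# Uses only the Python standard library.
import urllib.request

URL = "https://grok.com"  # assumed public entry point; may require a browser session

with urllib.request.urlopen(URL) as resp:
    html = resp.read().decode("utf-8", errors="replace")

# Search strings taken from the prompts quoted above; they may change as xAI updates the site.
for needle in ("therapist", "compassionate, empathetic"):
    offset = html.find(needle)
    if offset == -1:
        print(f"{needle!r}: not found")
    else:
        print(f"{needle!r}: found at offset {offset}")
```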

As reported above, AI therapy sits in a regulatory no man’s land. Illinois is one of the first states to prohibit it explicitly, but the broader legality of AI-driven care is still disputed between state and federal governments, each arguing over which ultimately has oversight. In the meantime, researchers and licensed professionals have warned against its use, pointing to the sycophantic nature of chatbots – designed to agree and affirm – which, in some cases, has pushed vulnerable users deeper into delusion or psychosis.

See also:

Explaining the phenomenon called “AI psychosis”

Then there’s the privacy nightmare. Because of ongoing litigation, companies like OpenAI are legally required to retain records of user conversations. If subpoenaed, your personal therapy sessions could be dragged into court and placed on the record. The promise of confidential therapy is fundamentally broken when every word can be held against you.

For now, xAI appears to be trying to shield itself from liability. The “therapist” prompts are written to stay in character with you 100 percent of the way, but with a built-in escape clause: if you mention self-harm or violence, the AI is instructed to drop the role-play and redirect you to hotlines and licensed professionals.

“If the user mentions harm to themselves or others,” one prompt reads, “prioritize safety by providing immediate resources and encouraging professional help from a real therapist.”
