State lawmakers grapple with child safety concerns over AI chatbots


When her 14-year-old son died by suicide after interacting with artificial intelligence chatbots, Megan Garcia turned her grief into action.

Last year, the Florida mother sued Character.AI, a platform where people can create and interact with digital characters that mimic real and fictional people.

Garcia alleged that the platform’s chatbots harmed the mental health of her son, Sewell Setzer III, and that the Menlo Park, California-based company failed to notify her or offer help when he expressed suicidal thoughts to these virtual characters.

Now Garcia supports state legislation aimed at protecting young people from “companion” chatbots that she says “are designed to engage vulnerable users in inappropriate romantic and sexual conversations” and “encourage self-harm.”

“Over time, we will need a comprehensive regulatory framework to address all of the harms, but for now I am grateful that California is at the forefront of this,” Garcia said at a news conference Tuesday ahead of a Sacramento hearing to review the bill.

Suicide prevention and crisis counseling resources

If you or someone you know is struggling with suicidal thoughts, seek help from a professional and call 9-8-8. The United States’ three-digit mental health crisis hotline, 988, will connect callers with trained mental health counselors. Text “HOME” to 741741 in the U.S. and Canada to reach the Crisis Text Line.

As companies move quickly to advance chatbots, parents, lawmakers and children’s advocacy groups fear there are not enough safeguards in place to protect young people from the technology’s potential dangers.

To address the problem, state lawmakers have introduced a bill that would require operators of companion chatbot platforms to remind users at least every three hours that the virtual characters are not human. Platforms would also have to take other steps, such as implementing a protocol for responding to suicidal ideation, suicide or self-harm expressed by users. That includes showing users suicide prevention resources.

Under Senate Bill 243, operators of these platforms would also report the number of times a companion chatbot raised suicidal ideation or actions with a user, along with other requirements.

The legislation, which cleared the Senate Judiciary Committee, is just one way state lawmakers are trying to tackle the potential risks posed by artificial intelligence as chatbots surge in popularity among young people. More than 20 million people use Character.AI every month, and users have created millions of chatbots.

Lawmakers say the bill could become a national model for AI protections. Supporters of the bill include the advocacy group Common Sense Media and the American Academy of Pediatrics, California.

“Technological innovation is crucial, but our children cannot be used as guinea pigs to test the safety of products. The stakes are high,” said state Sen. Steve Padilla (D-Chula Vista), one of the lawmakers who introduced the bill, at the event attended by Garcia.

But the tech industry and business groups, notably TechNet and the California Chamber of Commerce, oppose the legislation, telling lawmakers it would impose “unnecessary and burdensome requirements on general purpose AI models.” The Electronic Frontier Foundation, a San Francisco-based digital rights nonprofit, says the legislation raises First Amendment concerns.

“The government likely has a compelling interest in preventing suicide. But this regulation is not narrowly tailored or precise,” the EFF wrote to lawmakers.

Character.AI has also raised First Amendment concerns in the Garcia lawsuit. Its lawyers asked a federal court in January to dismiss the case, arguing that a ruling in favor of the parents would violate users’ constitutional right to free speech.

Chelsea Harrison, a spokesperson for Character.AI, said in an email that the company takes user safety seriously and that its goal is to provide “a space that is engaging and safe.”

“We are always working toward achieving that balance, as are many companies using AI across the industry. We welcome working with regulators and lawmakers as they begin to consider legislation for this emerging space,” she said in a statement.

She cited new safety features, including a tool that lets parents see how much time their teens spend on the platform. The company has also pointed to its efforts to moderate potentially harmful content and to direct certain users to the 988 Suicide and Crisis Lifeline.

Social media companies, including Snap and Facebook’s parent company Meta, have also released AI chatbots in their apps to compete with OpenAI’s ChatGPT, which people use to generate text and images. While some users rely on ChatGPT for advice or to complete work, some have also turned to these chatbots to play the role of a virtual boyfriend or friend.

Lawmakers are also grappling with how to define “companion chatbot.” Some apps, such as Replika and Kindroid, market their services as AI companions or digital friends. The bill does not apply to chatbots designed for customer service.

Padilla said at the news conference that the legislation targets product designs that are “inherently dangerous” and is meant to protect minors.
