Instagram will notify parents if teens ‘repeatedly’ search terms related to suicide


Instagram announced Thursday that it will begin alerting parents if their children repeatedly search for terms clearly associated with suicide or self-harm. Alerts will only be sent to parents enrolled in Instagram’s parental monitoring program.

Instagram says it already blocks this type of content from appearing in search results for teen accounts and directs people to helplines.

The announcement comes as Meta is in the midst of two child harm lawsuits. An ongoing lawsuit in Los Angeles questions whether Meta’s platforms are deliberately addictive and harm minors. Another, in New Mexico, is investigating whether Meta failed to protect children from sexual exploitation on its platforms. Thousands of families — as well as school districts and government entities — have sued Meta and other social media companies, claiming they deliberately design their platforms to be addictive and fail to protect children from content that can lead to depression, eating disorders and suicide.

Meta executives, including CEO Mark Zuckerberg, have disputed that the platforms are addictive. Under questioning by the plaintiff’s lawyer in Los Angeles, Zuckerberg said he remained in agreement with an earlier statement that the existing body of scientific work had not proven that social media harmed mental health.

Alerts will be sent via email, SMS or WhatsApp, depending on the parent’s available contact details, along with a notification to the parent’s Instagram account.

“Our goal is to allow parents to intervene if their teen’s searches suggest they might need help. We also want to avoid sending these notifications unnecessarily, which, if we overdo it, could make the notifications less useful overall,” Meta said in a blog post.

Meta said it is also working on similar notifications for parents about their children’s interactions with artificial intelligence.

“These will notify parents if a teen attempts to engage in certain types of conversations related to suicide or self-harm with our AI,” Meta said. “This is important work and we will have more to share in the coming months.”
