Instagram to alert parents if their teens search for suicide or self-harm terms

Instagram, a social network popular with young people, said Thursday it would alert parents if their teens repeatedly search for terms related to suicide or self-harm.
“Our goal is to empower parents to step in if their teen’s research suggests they might need support,” the company said in a blog post.
Parents will receive a notification by SMS, email or WhatsApp, along with resources to help them navigate sensitive conversations with their teenager.
Suicide Prevention and Crisis Counseling Resources
If you or someone you know is struggling with suicidal thoughts, seek professional help and call 988, the country’s first three-digit mental health helpline, which connects callers with trained mental health counselors. In the United States and Canada, you can also text “HOME” to 741741 to reach the Crisis Text Line.
The move is the latest example of how tech companies are responding to concerns from parents, politicians and advocacy groups that they are not doing enough to protect young people from harmful content.
A landmark trial is underway in Los Angeles to determine whether tech companies such as Instagram and YouTube can be held liable for allegedly marketing a harmful product and addicting users to their platforms.
The trial has included testimony from Instagram head Adam Mosseri, who told the court the company was trying to be as “safe as possible, but also censor as little as possible.”
Safety concerns have intensified as teenagers, some of whom have died by suicide, turn to AI chatbots to share some of their darkest thoughts.
Instagram has an AI assistant built into its search bar, and Meta, which owns Instagram, sends similar alerts if teens try to have certain conversations about suicide and self-harm with the assistant.
Meta prohibits posting content that encourages suicide or self-harm, but it allows people to discuss these topics. The company has also taken action against millions of pieces of content about suicide, self-harm and eating disorders, according to its transparency reports.
Some parents and teens, however, have alleged in lawsuits that young people were shown self-harm content on Instagram.
About 63% of U.S. teens ages 13 to 17 use Instagram, according to a Pew Research Center survey released in December. More than half of U.S. teens also use chatbots to search for information, according to a separate survey released this week.
Instagram, which has more than 3 billion monthly active users, said most teens don’t search for suicide or self-harm content on the platform, and that it already blocks such searches and directs people to suicide prevention resources. The company said the new alerts are part of its teen accounts, which also include limits on who can send messages, time-limit reminders and other features.
Parents who use Instagram’s supervision tools to keep tabs on their teens will start receiving the alerts in the U.S., U.K., Australia and Canada next week, with a rollout to other regions later this year.
Social media platforms have taken other steps to improve teen safety. This month, Meta, TikTok and Snap agreed to be evaluated on their teen safety efforts as part of a new program from the Mental Health Coalition.




