Parents Soon Can Block Their Kids from Interacting with AI Chatbots on Instagram


Television, social media, junk food: parents have always imposed limits on their children. Now add AI chatbots to the list. Meta announced Friday that starting in 2026, parents will be able to prevent teenagers from interacting with AI chatbots on Instagram. Parents will be able to block all access or block access to specific AI characters.

Meta, owner of Instagram, Facebook and WhatsApp, is adding parental controls months after a report in August revealed that the company’s AI guidelines allowed chatbots to “engage a child in romantic or sensual conversations.” Another report released earlier this month indicated that 3 in 5 children aged 13 to 15 encounter dangerous content or spam messages on Instagram.




The company said in a blog post Friday that the new AI chatbot controls align with parents’ concerns about “who (children) interact with, what type of content they see, and whether their time is well spent.”


“We hope that today’s updates will give parents some peace of mind knowing that their teens will be able to make the most of all the benefits that AI offers, with the appropriate safeguards and oversight in place,” Adam Mosseri, head of Instagram, and Alexandr Wang, head of AI, said in the blog.

How Parents Can Control AI Chatbot Interactions

[Image: Instagram chatbot control example. Credit: Meta]

Teenagers can interact with AI chatbots through Instagram’s direct messages section. Chats can take place with a creator’s AI, a custom AI character, or Meta’s general-purpose AI.

Meta said the new controls allow parents either to completely disable their teen’s access to one-on-one chats with AI characters or to block specific AI characters instead.

Additionally, parents can “get insight into the topics their teens are discussing with AI characters.”

The company did not detail how parents would know what AI topics their children are discussing.

Teenagers can still use Meta’s regular AI assistant “with age-appropriate default protections to keep them safe.”

Expert: controls are “insufficient”

James Steyer, founder and CEO of Common Sense Media, a digital advocacy and research nonprofit, called Meta’s new AI chatbot controls a “reactive concession” and insufficient.

“Meta’s refusal to treat the safety of our children with the urgency it requires is deeply disappointing but unfortunately not surprising,” Steyer told CNET. “For too long, this company has prioritized the relentless pursuit of engagement over the safety of our children, ignoring warnings from parents, experts and even its own employees.”

Steyer said no one under the age of 18 should use Meta AI chatbots “until their fundamental security flaws are fixed.”

A Meta representative said the company continues to improve safety.

“We have already gathered input from high-level experts who have shaped our initial thinking, and we will continue to work with experts and parents to ensure a thoughtful, privacy-respecting approach,” the representative told CNET.

Other safeguards for AI chatbots

Meta also outlined additional protections regarding AI chatbots and adolescents:

  • AI characters are “designed not to engage in age-inappropriate discussions about self-harm, suicide, or eating disorders.”
  • AI characters can only focus on “age-appropriate topics like education, sports, and hobbies.”
  • Parents can see if their teens are chatting with AI characters.

Earlier this week, Instagram said it would only allow teens to view content “similar to what they would see in a PG-13 movie,” in line with its new guidelines for teen accounts.
