New bipartisan GUARD Act would ban minors from AI chatbots

A new bipartisan bill introduced by Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., would ban minors (users under 18) from interacting with certain AI chatbots. It responds to growing concern about children using “AI companions” and the risks these systems can pose.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive offers straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM bulletin.
What’s in the GUARD Act?
Here are some of the bill’s key provisions:
- AI companies would be required to check user age with “reasonable age verification measures” (e.g., government ID) rather than simply asking for a date of birth.
- If a user is found to be under 18, a company must prohibit them from accessing an “AI companion”.
- The bill also requires that chatbots clearly disclose, in every conversation, that they are not human and do not hold professional qualifications (therapeutic, medical or legal).
- It creates new criminal and civil sanctions for companies that knowingly make available to minors chatbots that solicit or facilitate sexual content, or that encourage self-harm or violence.

Bipartisan lawmakers, including Senators Josh Hawley and Richard Blumenthal, introduced the GUARD Act to protect minors from unregulated AI chatbots. (Kurt “CyberGuy” Knutsson)
The motivation: Lawmakers cite testimony from parents and child welfare experts, along with a growing number of lawsuits alleging that some chatbots manipulated minors, encouraged self-harm or worse. The basic framework of the GUARD Act is clear, but the details reveal how far its reach could extend to both tech companies and families.
META AI DOCS EXPOSED, ALLOWING CHATBOTS TO FLIRT WITH CHILDREN
Why is this so important?
This bill is more than just another piece of technology regulation. It sits at the center of a growing debate about the role of artificial intelligence in children’s lives.
Rapid growth of AI + child safety concerns
AI chatbots are no longer toys. Many children use them; Hawley has cited estimates that more than 70 percent of American children use these products. These chatbots can produce human-like responses, mimic emotion and invite ongoing conversations. For minors, these interactions can blur the line between machine and human, and they may seek advice or emotional connection from an algorithm rather than a real person.
Legal, ethical and technological issues
If this bill passes, it could reshape how the AI industry handles minors, age verification, disclosures and accountability. It shows that Congress is willing to move away from voluntary self-regulation and adopt strong safeguards when children are involved. The proposal could also open the door to similar laws in other high-risk areas, such as mental health chatbots and AI tutoring assistants. Overall, it marks a shift from waiting to see how AI develops to taking immediate action to protect young users.

Parents across the country are calling for stronger safeguards as more than 70% of children use AI chatbots that can mimic empathy and emotional support. (Kurt “CyberGuy” Knutsson)
Industry pushback and innovation concerns
Some tech companies say such regulation could stifle innovation, limit beneficial uses of conversational AI (such as education or mental health support for older teens), or impose heavy compliance burdens. This tension between safety and innovation is at the heart of the debate.
What the GUARD Act requires of AI companies
If passed, the GUARD Act would impose strict federal standards on how AI companies design, vet and manage their chatbots, especially when minors are involved. The bill sets out several key obligations aimed at protecting children and holding businesses accountable for harmful interactions.
- The first major requirement concerns age verification. Businesses must use reliable methods such as a government-issued ID or other proven tools to confirm that a user is at least 18 years old. It is no longer enough to simply ask for a date of birth.
- The second requirement covers clear disclosure. Every chatbot must tell users at the start of each conversation, and at regular intervals, that it is an artificial intelligence system and not a human being. The chatbot must also make clear that it does not hold professional credentials such as medical, legal or therapeutic licenses.
- Another provision bars minors from access. If a user is verified as under 18, the company must block access to any “AI companion” features that simulate friendship, therapy or emotional support.
- The bill also introduces civil and criminal sanctions for companies that violate these rules. A company whose chatbot engages in sexually explicit conversations with minors, encourages self-harm or incites violence could face significant fines and other legal consequences.
- Finally, the GUARD Act defines an “AI companion” as a system designed to foster interpersonal or emotional interaction with users, such as friendship or therapeutic dialogue. This definition makes clear that the law targets chatbots capable of forming human-like connections, not limited-purpose assistants.

The proposed GUARD Act would require chatbots to verify users’ ages, disclose that they are not human, and block users under 18 from AI companion features. (Kurt “CyberGuy” Knutsson)
OHIO LAWMAKER PROPOSES TOTAL BAN ON MARRIAGE TO AI SYSTEMS AND GRANTING THEM LEGAL PERSONHOOD
How to stay safe in the meantime
Technology often evolves faster than laws, meaning families, schools and caregivers must take the lead now in protecting young users. These steps can help create safer online habits as lawmakers debate how to regulate AI chatbots.
1) Know which chatbots your children use
Start by finding out which chatbots your kids talk to and what those bots are designed to do. Some are for entertainment or education, while others focus on emotional support or companionship. Understanding each bot’s purpose helps you spot when a tool crosses from harmless fun into something more personal or manipulative.
2) Establish clear rules about interaction
Even if a chatbot is labeled as safe, decide together when and how it can be used. Encourage open communication by asking your child to show you their chats and explain what they like about them. Framing this as curiosity, not control, builds trust and keeps the conversation flowing.
3) Use parental controls and age filters
Take advantage of built-in safety features wherever possible. Turn on parental controls, enable child-friendly modes, and block apps that allow private or unsupervised chats. Small changes to settings can make a big difference in reducing exposure to harmful or suggestive content.
4) Teach children that chatbots are not humans
Remind children that even the most advanced chatbot is still software. It can imitate empathy, but it does not understand or care in the human sense. Help them recognize that advice about mental health, relationships or safety should always come from trusted adults, not an algorithm.
5) Watch for warning signs
Stay alert for changes in behavior that could signal a problem. If a child withdraws, spends long hours talking privately with a chatbot, or repeats harmful ideas, intervene early. Talk openly about what is happening and, if necessary, seek professional help.
6) Stay informed of changing laws
Regulations such as the GUARD Act and new state measures, including California’s SB 243, are still taking shape. Stay on top of updates so you know what protections exist and what questions to ask app developers or schools. Awareness is the first line of defense in a rapidly changing digital world.
Take my quiz: How safe is your online security?
Do you think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get personalized analysis of what you’re doing right and what needs improvement. Take my quiz here: Cyberguy.com.
CLICK HERE TO DOWNLOAD THE FOX NEWS APP
Kurt’s Key Takeaways
The GUARD Act represents a bold step toward regulating the intersection of minors and AI chatbots. It reflects growing concern that unregulated AI companions could harm vulnerable users, particularly children. Of course, regulation alone will not solve every problem; industry practices, platform design, parental involvement and education all matter. But this bill signals that the “build it and see what happens” era for conversational AI may be ending where kids are involved. As the technology continues to evolve, our laws and personal practices must evolve as well. For now, staying informed, setting boundaries, and treating chatbot interactions with the same scrutiny we would apply to a conversation with a stranger can make a real difference.
If a law like the GUARD Act becomes reality, should we expect similar regulation for all emotional AI tools aimed at children (tutors, virtual friends, games) or are chatbots fundamentally different? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.




