Meta censored a post on lesbian relationships, proving its priorities are all wrong


In April, Meta belatedly reversed course after deleting an Instagram post honoring older lesbian relationships in Brazil. The deleted post was not sexual in nature and contained nothing harmful to minors. It documented a snapshot of a moment in history when lesbians were forced to hide their relationships as “roommates” or “girlfriends” and their love was erased from the public record. Meta removed the content anyway.

Meta cited its hate speech rules. The company’s Oversight Board later recognized what should have been obvious from the start: the Brazil case was an example of over-enforcement against a marginalized community, driven by automated systems incapable of reading the context, the reclaimed language, or even the full message itself. The content was restored only after outside intervention and advocacy from the LGBTQ+ community.


This case is now being treated as a content moderation error, but policymakers should recognize it as a clear warning about what happens when lawmakers push platforms to police content instead of fixing their design. Across the country, states are racing to “protect children online” by restricting access to social media or pressuring companies to remove vaguely defined “harmful” content. But what happened in Brazil shows the human cost of this approach.

When platforms are incentivized to remove speech quickly and at scale, they don’t become better judges of nuance. They become blunter instruments, and the first people affected are those whose stories require human context and radical empathy to understand.

If lawmakers truly want to protect children, they should stop asking platforms to decide which stories are acceptable and start regulating the fundamental design choices that cause harm in the first place, like endless scrolling, engagement-based recommendations, and surveillance-based feeds.


Here’s why this distinction matters, especially for LGBTQ+ children and other marginalized groups, like neurodivergent children. LGBTQ+ youth are far more likely than their peers to rely on online spaces to find community, information, and support, often because those things are not available or safe at home or at school. But they are also significantly more likely to find themselves in dangerous online interactions: harassment, grooming, doxxing, or being pushed into high-risk spaces they did not seek out.

In Australia, after a ban on social media for anyone under 16 was signed into law, disability rights advocates noted that young people with autism were being cut off from some of the only support and peer networks available to them.

Recommender systems don’t understand vulnerability, but they understand engagement. When a queer kid is looking for community, platforms often respond by aggressively amplifying whatever makes them click. This usually means increasingly sexualized content, adult strangers, extremist rhetoric, or predatory accounts that know exactly how to exploit isolation.

Infinite scrolling makes it much harder for teens to disengage, according to the Electronic Privacy Information Center, and even more so for people in vulnerable communities. Algorithmic suggestions for “friends” or “accounts” erase the boundaries between adolescents and adults. Weak default settings make it difficult to block, mute, or simply disappear.

Young people, not just LGBTQ+ young people, are exposed to harm online because platforms are designed to attract attention, not protect users. Parents are right to be concerned and to advocate for change. But content-based framing misses the real problem.

The biggest risks kids face online come not from a single bad post that escapes moderation, but from automated systems that serve content to kids they didn’t ask for, connect them to people they don’t know, and keep them scrolling long after the warning signs appear.

Policymakers, both at the state and federal level, must design regulations that directly address these risks. Age-appropriate design codes don’t tell platforms what speech to allow, but they can tell them how to behave. Design codes require safer defaults, such as limits on behavioral profiling, stronger blocking tools, reduced amplification of unsolicited recommendations, and guardrails that slow virality and compulsive use.

The public should push for safer product design, not for measures that infringe on First and Fourth Amendment rights. Design codes reduce the risk of a curious or lonely child being algorithmically steered into danger, as I once was: seeking community and pushed toward risk by systems that didn’t care who I was.

Age-appropriate design codes offer a way out of this problem. By regulating how platforms are built rather than what people are allowed to say, design code laws reduce harm without turning companies into cultural censors. They don’t require platforms to interpret reclaimed slurs, queer history, or political discourse. Instead, they force companies to rethink the engineering choices that create risk in the first place.

We don’t need more content or platform bans. We need fewer harmful systems. If we are serious about protecting children online, especially those who are already most at risk, this case reminds us exactly where to start.

This article reflects the opinion of the author.

Lennon Torres is a former Dance Moms performer who is now fighting for the safety of young people online. A trans activist and University of Southern California alumna, she uses her fluency in pop culture and lived experience to fuel her work at the Heat Initiative, taking on tech giants and demanding that platforms protect and empower the next generation.
