Contributor: If social platforms are harmful, don’t just ban kids. Regulate the harms

As major social media companies head to court this year to defend themselves against allegations that their products harmed young people’s mental health, policymakers are looking for decisive answers. The lawsuits, which focus on whether the platforms knowingly designed addictive and psychologically harmful systems for young people, raise long-avoided questions: Who bears responsibility for online harm? And what exactly should be done about it?

Across the world, a political response has already gained momentum. Faced with enormous public pressure, lawmakers are increasingly turning to bans: outright prohibiting or severely restricting teens’ access to social media. These proposals are politically attractive. They are simple, they signal action, and they promise protection without requiring the nuanced, slow, and logistically complex work of regulating businesses worth trillions of dollars.

But blanket bans are a poor response. As an adolescent psychologist and researcher who studies scalable digital mental health interventions for youth, I believe bans without systemic oversight are worse than ineffective; they constitute a form of political abdication. They let lawmakers abandon responsibility, let tech companies off the hook, and sidestep the much harder task of making online spaces truly safer for the millions of young people who already use them daily and will likely continue to do so – with or without an attempted ban, given the known challenges of enforcing one.

The current lawsuits do not challenge the existence of social media. They challenge how the platforms were allowed to operate. The plaintiffs argue that the companies knowingly built design features that maximize engagement by exploiting young people’s psychological vulnerabilities, while minimizing or obscuring the risks. The distinction matters: if platforms’ safety risks lie in their design, banning young people from access does nothing to solve the underlying problem.

Decades of research complicate the popular narrative that social media, in itself, is the primary driver of the youth mental health crisis. In large studies, the association between overall time spent on social media and mental health outcomes is often weak or inconsistent. What matters far more than screen time is what young people encounter online, how content is delivered, and how platforms are structured to support or harm users’ well-being.

For many adolescents, especially those who are marginalized, isolated, or lack supportive offline environments, online spaces often serve as a lifeline. LGBTQ+ youth, youth with mental health issues, and those from communities with limited access to care often turn first to the internet when they are struggling. In our lab work, we have shown that digital tools for identity exploration and skill development – offered to young people for free, anonymously, and through social media platforms – can alleviate stress and reduce symptoms in vulnerable adolescents, with benefits lasting for weeks or even months.

When brief, self-guided mental health interventions are delivered directly on social media platforms, where youth are already seeking support, they can reduce hopelessness and self-hatred in the short term, increase motivation to stop self-harm, and boost awareness of crisis resources among at-risk adolescents. These are not theoretical advantages; they are results observed in large-scale trials involving thousands of young people.

Blanket bans threaten to cut off these avenues of support without replacing them with something safer or more effective. Adolescents consistently report that major barriers to mental health care include not wanting to involve parents, not knowing where to go, and fearing loss of autonomy. Policies that rely on age restrictions or parental permission exacerbate these barriers, particularly for young people whose families are unsupportive or unsafe. And for digitally savvy teens, bans don’t end online engagement; they simply redirect it. Young people will lie about their age, migrate to less regulated platforms, or retreat to private, harder-to-police spaces where safety risks may be even greater.

None of this denies that social media presents real dangers. But these dangers are not accidental; we (adults) designed them. They stem from algorithmic recommendation systems, infinite-scrolling designs, opaque personalization, and engagement-maximizing feedback loops that prioritize profit over user well-being. These features are purposefully designed, thoroughly tested, and fiercely defended because they are lucrative.

Responding to this reality with bans on youth access rather than regulating platform design constitutes a profound misalignment of responsibilities. This places the burden of safety on adolescents and families while leaving the systems that generate harm intact.

If we are serious about protecting and promoting young people’s mental health, we need systemic oversight, not quick-fix restrictions.

First, policymakers must tackle algorithmic accountability head-on. The biggest risks for young users come from engagement-maximizing recommendation systems designed to capture attention at all costs. Regulation should require transparency about how these systems operate, restrict or prohibit algorithmic feeds that prey on minors, and impose safer default settings that restore user agency. This is not about censoring content; it’s about regulating architecture.

Second, we need effective enforcement mechanisms. Voluntary corporate pledges and internal safety teams aren’t enough when incentives aren’t aligned. Independent monitoring bodies with real authority – capable of auditing, penalizing, and enforcing compliance – are essential. Without them, safety will always be subordinate to growth.

Third, we should invest in evidence-based digital mental health supports that meet young people where they are. The same technologies that can amplify harm can also provide help – fast, inexpensive, and at scale. Rather than cutting off access to platforms wholesale, we should demand and incentivize the integration of proven mental health supports into the digital ecosystems young people already use.

The ongoing litigation against social media companies represents a rare opportunity. Courts and the public are scrutinizing not only what young people are doing online, but also what tech companies have built and why. In response, we have the opportunity to choose between policies that place responsibility on families and young people (bans) and policies that confront structural drivers of harm head-on (regulation and reform).

Teenagers are online and they will stay there. The question is whether we will insist on making online spaces safer or settle for bans that let the real problems persist unchecked.

Jessica L. Schleider is an associate professor of medical social sciences, pediatrics, and psychology at the Feinberg School of Medicine at Northwestern University, where she directs the Lab for Scalable Mental Health.
