A jury said Instagram and YouTube are defective — now what?

Is social media not only bad, but illegally bad? And should tech companies have to pay for making it that way? According to two American juries – and no shortage of outside commentators – the answer to both questions is “yes.”
Earlier this week, two juries – one in New Mexico, the other in Los Angeles – found Meta liable for a total of hundreds of millions of dollars for harming minors. YouTube was also found liable in Los Angeles, and both companies are appealing their losses. In one sense, the verdicts were surprising. Meta and Google operate platforms for hosting and transmitting speech and are generally protected in various ways by Section 230 and the First Amendment; it is unusual for suits like these to clear those hurdles. In another sense, they seem inevitable. The web of 2026 has become almost synonymous with a handful of widely hated for-profit platforms, and the damage they have caused is often tangible – but it is still far from certain what these defeats will change, or what the collateral damage might be.
If these decisions survive appeal – which is not certain – the direct result would be multimillion-dollar penalties. Depending on the outcome of several other “landmark” cases in Los Angeles, a much broader class settlement could follow. Even at this early stage, it’s a victory for a legal theory that says social media platforms should be treated like defective products — a strategy designed to circumvent Section 230’s shield, but one that often fails in court. “The California case in particular is the first time that social media has had to face the scrutiny and judgment of a jury for specific personal injuries,” attorney Carrie Goldberg, who has brought major liability lawsuits against social media companies, including an unsuccessful case against Grindr, told The Verge. “This is the dawn of a new era.”
For many activists, the overall goal is to make clear that lawsuits will keep piling up if companies don’t change their business practices. Which practices? In New Mexico, a jury was swayed by arguments that Meta made misleading statements about the safety of its platforms. In Los Angeles, plaintiffs successfully argued that Instagram and YouTube were designed in ways that fostered social media addiction, harming a teenage user. Meta and Google (and other nervous companies) could presumably change specific features or be more careful in their public statements and disclosures. But each case turns on a very specific set of circumstances, and there is no single answer as to what needs to change.
Eric Goldman, a legal blogger and Section 230 expert, sees clear legal danger for social media services. “These decisions indicate that juries are prepared to impose major liability on social media providers based on allegations of social media addiction,” Goldman wrote after the verdicts. In an email to The Verge, he stressed that the problem is not limited to juries. “The judges are certainly aware of the controversies around social media,” Goldman said. In the Los Angeles case and other upcoming trials, “judges did not give social media defendants much benefit of the doubt, which is why the plaintiffs’ new cases were able to go to trial in the first place.” It’s a situation, he says, that “seems different from ten years ago.”
Goldman pointed out that New York and California have also passed laws restricting “addictive” social media feeds for minors — so even if an appeals court overturns these recent verdicts, it won’t necessarily turn back the clock.
The best-case outcome here looks like the one laid out by people like Julia Angwin, who wrote in The New York Times that companies should be pushed to change “toxic” features like infinite scrolling, beauty filters that encourage body dysmorphia, and algorithms that prioritize “shocking and crude” content. The worst-case scenario looks more like the one Mike Masnick described on Techdirt, arguing that the decisions would be a disaster for small social networks, which could be sued for letting users post and view First Amendment-protected speech under a vague harm standard. He noted that the New Mexico case rested in part on the argument that Meta harmed children by providing end-to-end encryption in private messaging, thereby creating an incentive to abandon a feature that protects users’ privacy — and indeed, Meta discontinued end-to-end encryption on Instagram earlier this month.
Blake Reid, a professor at Colorado Law, is more circumspect. “It’s difficult right now to predict what’s going to happen,” Reid told The Verge in an interview. On Bluesky, he noted that companies will likely look for “cold and calculated” ways to avoid legal liability with as little disruption as possible, without fundamentally rethinking their business models. “There are obviously harms here and it’s very important that the tort system has taken those harms into account” in the recent cases, he told The Verge. “It’s just that what comes after them is less clear to me.”
Although Reid sees these decisions as legal risks for smaller platforms with fewer resources, he is not convinced they are more serious than the challenges new entrants already face in a hyper-consolidated online landscape built on massive amounts of collected data. “There are things that make it difficult to do something really new in this space that are driven by the type of market and the politics around it,” he said.
Reid, Goldman and Masnick all warn that there is a strong chance the fallout will harm marginalized people who use social media to connect. “There will be even greater pressures to restrict or prohibit children from accessing social media,” Goldman told The Verge. “This harms many subpopulations of minors, ranging from LGBTQ adolescents who will be isolated from communities that can help them navigate their identities to autistic minors who can express themselves better online than in face-to-face conversations.”
If platforms like Instagram are inherently harmful in ways directly comparable to gambling or cigarettes – comparisons critics frequently make – then their downfall would be no great loss. But even research suggesting that social media can be harmful to teens has linked moderate use to better well-being. At the same time, harmful online phenomena like stalking and eating disorder communities thrived even before hyper-optimized, recommendation-driven modern social media; tinkering with specific algorithmic formulas could have a positive impact, but it may not provide a deep or lasting solution. The point of punishing Meta is obvious – what it will mean for everyone is much less clear.



