The Appleton Times

Truth. Honesty. Innovation.

Technology

A jury says Meta and Google hurt a kid. What now?

By Jessica Williams

3 days ago


Juries in California and New Mexico have ruled against Meta and Google in landmark social media addiction lawsuits, finding the companies liable for negligent platform designs that harmed young users' mental health. The verdicts, which the tech giants plan to appeal, could open the door to more litigation while sparking debates over Section 230 protections and free speech.

In a pair of landmark verdicts that could reshape the legal landscape for social media giants, juries in California and New Mexico have found Meta and Google liable for contributing to the mental health struggles of young users through negligent platform design. The rulings, handed down in recent weeks, mark the first major courtroom losses for the tech industry in a wave of lawsuits alleging that features like infinite scrolling and algorithmic recommendations foster addiction and exacerbate issues such as anxiety, depression, and body dysmorphia. Meta, which owns Facebook and Instagram, and Google, parent of YouTube, have vowed to appeal both decisions, signaling a protracted battle ahead.

The California case centered on a 20-year-old plaintiff identified only as Kaley, who argued that her heavy use of Instagram and YouTube led to severe mental health challenges. During the trial in Los Angeles Superior Court, jurors heard testimony from high-profile executives, including Meta CEO Mark Zuckerberg and Instagram head Adam Mosseri, as well as former employees who served as whistleblowers. Internal documents from the companies were presented as evidence, revealing awareness of potential harms from certain features. "It was trying to get around a problem that has been going on with tech for a long time: can you separate design from content on these platforms?" said Lauren Feiner, a senior policy reporter for The Verge who covered the proceedings from the courtroom.

In New Mexico, the lawsuit targeted Meta alone, with similar allegations of platform flaws harming teenage users. The cases, described by industry observers as "bellwether trials," are seen as test cases that could pave the way for dozens more suits against social media companies. According to Casey Newton, founder and editor of the Platformer newsletter and a former Verge editor, these verdicts crack open a vulnerability in the tech sector's long-standing legal defenses. "For the past 20 years, companies have been able to use Section 230 as a shield," Newton explained. "Whenever there is any remotely content-related challenge to any of these platforms in court, they just get dismissed out of hand."

Section 230 of the Communications Decency Act, enacted in 1996, shields platforms from liability for user-generated content, allowing companies like Meta and Google to argue that harms stem from speech rather than their products. But in these trials, plaintiffs successfully framed features such as autoplay videos, push notifications, and infinite scroll as defective product designs, akin to cars without seatbelts. Jurors in the California case awarded Kaley damages, though the exact amount was not immediately disclosed, while the New Mexico verdict similarly held Meta accountable for negligence.

The trials drew on precedents like Lemmon v. Snap, a lawsuit stemming from a fatal 2017 car crash linked to Snapchat's speed filter, which encouraged risky driving to capture high-speed selfies. In that case, a federal appeals court ruled in 2021 that the filter's design itself created dangerous incentives, so Section 230 did not shield Snap from a negligent design claim. "The reason that that was important was that all of a sudden the 230 shield wasn’t absolute," Newton noted. Building on this, attorneys in the recent cases argued that Meta and Google's algorithms and engagement tools similarly incentivize compulsive use, particularly among vulnerable teens.

Feiner, who witnessed Zuckerberg's testimony, described a tense atmosphere in the Los Angeles courtroom. "When I was watching Mark Zuckerberg on the stand, he was talking about a certain beauty filter that they had and how one of his own employees pushed back on including it," Feiner recounted, noting that the employee had cited potential impacts on his own daughters. Whistleblowers and internal memos painted a picture of companies aware of these risks yet prioritizing user growth and engagement metrics.

Notably, TikTok and Snap settled their involvement in the California suit before the trial concluded, a move that observers interpreted as a sign of the case's strength. "TikTok and Snap settled before the trial. That was the moment when I said, ‘Okay, they must be really, really scared,’" Newton said. Meta and Google, however, pressed forward, only to face jury decisions against them. The verdicts come amid a broader surge in litigation, with attorneys anticipating a flood of similar claims now that Section 230's applicability to design choices has been questioned.

Experts caution that while the rulings represent a shift, they do not spell the end of Section 230. The law's architects have expressed openness to reforms, particularly regarding emerging issues like AI-generated content, but any changes must navigate First Amendment constraints. "The First Amendment obviously prohibits the government from regulating what speech these companies promote and moderate," noted Nilay Patel, host of The Verge's Decoder podcast, during a discussion with Newton and Feiner. Private lawsuits, however, offer a workaround, focusing on product liability rather than content moderation.

Critics of the verdicts, including some tech advocates, argue that holding platforms accountable for algorithmic curation could chill free speech by forcing over-moderation. They contend that features like recommendations are essential to the platforms' functionality and that harms often arise from user interactions, not inherent defects. Newton acknowledged this tension: "An algorithmic feed with no content in it simply isn’t a compelling product, let alone a negligently defective one that causes harm." Yet, he added, the near-universal negative experiences with social media—such as repeated app deletions followed by reinstalls—make juries receptive to plaintiffs' arguments.

The cases invite comparisons to the tobacco industry, where companies were eventually held liable for addictive products despite claims of user choice. However, Feiner highlighted a key difference: "One thing that’s a big difference between this moment and that for big tobacco is that there’s no safe cigarette. There are a lot of studies that show that’s not really the case for social media, that some level of social media use actually has a positive or at least neutral effect on people." She emphasized that the core issue is overuse and compulsive engagement, which can sever real-world connections even as platforms foster virtual ones.

Broader context includes ongoing regulatory efforts, such as state laws imposing age limits or parental controls, though experts like Newton and Feiner question their efficacy. Patel framed the discussion around why "nuclear options" like age limits and repealing Section 230 won't make social media safer, pointing to the complexity of balancing innovation, speech, and safety. Calls to repeal Section 230 entirely have gained traction in political circles, but Feiner noted they may be opportunistic rather than directly tied to these verdicts.

As appeals loom, the tech industry braces for uncertainty. Meta and Google have not commented extensively beyond their intent to fight the rulings, but internal fallout could include redesigns to mitigate liability. For users like Kaley, whose story galvanized the California jury, the wins offer validation. "Everybody knows someone who has a huge problem with Instagram," Newton observed. "This person is probably in your immediate family." With millions affected—even if statistically a small percentage—these cases underscore a societal reckoning with social media's role in daily life.

Looking ahead, the verdicts could influence federal legislation and international regulations, prompting platforms to invest more in mental health safeguards. Yet, as Patel reflected, Section 230 is now nearly three decades old, designed for an internet era that no longer exists. "It’s unclear whether the world it was designed to help create ever came into existence," he said. For now, the lawsuits signal that Big Tech's era of near-total immunity may be waning, forcing a reevaluation of how digital products impact young minds.

The Appleton Times will continue to monitor developments in these cases and their potential ripple effects across the tech sector.
