The Appleton Times


The executive who helped build Meta’s ad machine is trying to expose it

By Lisa Johnson

4 days ago


Former Meta executive Brian Boland testified in a Los Angeles trial that the company prioritized growth and engagement over user safety, particularly for teens, drawing from his 11 years building its ad systems. His account contrasts with Meta CEO Mark Zuckerberg's defense, highlighting internal cultural shifts and algorithmic designs in a case alleging social media's role in mental health harms.

In a Los Angeles courtroom, former Meta executive Brian Boland delivered testimony that painted a stark picture of the company's inner workings, accusing it of prioritizing growth and profits over user safety, particularly for teenagers. Boland, who spent 11 years at Meta building its advertising empire, took the stand on Thursday in a high-profile trial where plaintiffs allege that Meta and Alphabet's YouTube contributed to the mental health decline of a young woman. The case, which began earlier this month, centers on claims of social media addiction and its harms, with Boland's account providing a rare insider's critique of how Meta's business model allegedly incentivized addictive features on platforms like Facebook and Instagram.

Boland, who last served as Meta's vice president of global partnerships before departing in 2020, described his evolution from a devoted employee to a vocal critic. He told the jury he went from having 'deep blind faith' in the company to the 'firm belief that competition and power and growth were the things that Mark Zuckerberg cared about most,' according to court transcripts reported by The Verge. His role involved monetizing content and expanding advertising, starting from various ad-focused positions after joining in 2009. Boland's testimony followed Meta CEO Mark Zuckerberg's appearance the previous day, in which Zuckerberg defended the company's mission as one that balances safety with free expression, emphasizing long-term user enjoyment over short-term revenue gains.

The trial stems from a lawsuit filed by the family of a young woman who reportedly suffered severe mental health issues linked to excessive use of social media. While the jury is focused solely on Meta's product decisions and algorithms, not user-generated content, Boland's testimony delved into how the company's engineering choices allegedly amplified engagement at the expense of wellbeing. He recounted Meta's early cultural mantra, 'move fast and break things,' which he said embodied an ethos of rapid deployment without fully considering potential harms. 'Don’t really think about what could go wrong with a product, but just get it out there and learn and see,' Boland explained. At the peak of this mindset's internal popularity, he added, employees would find notes on their desks asking, 'What will you break today?'

Zuckerberg, during his testimony on Wednesday, highlighted internal documents from around 2019 that showed employee disagreements with his decisions, portraying Meta as a place that fosters diverse opinions. Boland countered this narrative, stating that while such openness may have existed earlier in his tenure, the company later developed 'a very closed down culture.' He alleged that Zuckerberg set clear priorities in all-hands meetings, such as shifting to mobile-first products or outpacing competitors. Boland recalled a specific instance when Zuckerberg launched a 'lockdown' mode in response to rumors of a Google social network—likely referring to Google+—complete with a digital countdown clock in the office tracking progress toward goals. Notably, he said, there was never a similar lockdown dedicated to user safety initiatives.

Meta has consistently denied prioritizing engagement over safety, with both Zuckerberg and Instagram head Adam Mosseri testifying in recent weeks that creating platforms where users 'feel good' aligns with the company's long-term interests. Mosseri and Zuckerberg argued that enjoyable experiences drive sustained usage and revenue. Boland, however, disputed this directly. 'My experience was that when there were opportunities to really try to understand what the products might be doing harmfully in the world, that those were not the priority,' he said. 'Those were more of a problem than an opportunity to fix.' Instead, when safety concerns arose via media reports or regulators, the response focused on 'managing through the press cycle' rather than deep investigations, according to Boland.

Though Boland led an advertising team that he encouraged to proactively identify 'broken parts' in the system, he testified that this self-reflective approach did not permeate the broader organization. He emphasized Zuckerberg's top-down influence, instilling in engineers that 'the priorities were on winning growth and engagement.' This culture, Boland claimed, extended to the platforms' core algorithms, which he described as having 'an immense amount of power' and being 'absolutely relentless' in chasing engagement metrics. 'There’s not a moral algorithm, that’s not a thing … Doesn’t eat, doesn’t sleep, doesn’t care,' Boland stated, underscoring how these systems pursue programmed goals without ethical considerations.

The testimony also touched on Boland's direct interactions with Zuckerberg. During direct examination by lead plaintiff attorney Mark Lanier, Boland revealed he had raised concerns about data showing 'harmful outcomes' from the algorithms, urging further investigation. Zuckerberg's response, according to Boland, was dismissive: something to the effect of, 'I hope there’s still things you’re proud of.' Shortly after, Boland decided to leave the company, forgoing upwards of $10 million in unvested stock despite having earned more over his career. He described speaking out against Meta as 'nerve-wracking,' noting, 'This is an incredibly powerful company.'

Meta's legal team sought to undermine Boland's credibility during cross-examination. Attorney Phyllis Jones pointed out that Boland never worked on youth safety teams, and Boland acknowledged that advertising models and algorithms are not inherently problematic. He also conceded that many of his worries involved user-posted content, which falls outside the trial's scope since the jury can only consider Meta's own decisions. Additionally, Zuckerberg had remarked on Wednesday that Boland 'developed some strong political opinions' toward the end of his tenure, though neither man elaborated in court. A 2025 blog post by Boland, referenced in reports, indicated his disillusionment included Meta's handling of events like the January 6, 2021, Capitol riot, in which he accused the platform of amplifying 'Stop the Steal' propaganda.

To bolster Boland's standing, Lanier presented evidence of his respected status at Meta, including a glowing departure statement from his then-boss quoted in a CNBC article and an unnamed source praising his 'strong moral character.' The judge has generally allowed the term 'whistleblower' to describe Boland, despite Meta's efforts to restrict it to avoid prejudicing the jury. This trial is part of a broader wave of litigation against tech giants, including cases over addiction, safety, and mental health impacts of social media, as detailed in ongoing coverage of 'Social media on trial.'

The case unfolds amid heightened scrutiny of Big Tech's role in youth mental health. A 2023 U.S. Surgeon General's advisory warned of social media's potential harms to adolescents, citing studies linking heavy use to increased anxiety and depression. Meta has faced multiple lawsuits and congressional hearings on these issues, with internal documents leaked in prior cases like the 2021 Facebook Files revealing awareness of Instagram's negative effects on teen girls. Boland's testimony adds to this mosaic, offering a former executive's perspective on how profit motives allegedly shaped product design.

Plaintiffs argue that features like infinite scrolling, notifications, and recommendation algorithms on Instagram and Facebook were engineered to maximize time spent, drawing in vulnerable users including teens despite known risks. Meta counters that it has invested billions in safety tools, such as parental controls and content filters, and that correlation does not prove causation in mental health harms. The trial, presided over in Los Angeles Superior Court, is expected to continue for several more weeks, with expert witnesses on both sides yet to testify.

As the proceedings progress, Boland's account highlights tensions within Silicon Valley's ad-driven ecosystem. His departure in 2020 came during a period of internal reckoning for Meta, following scandals over misinformation and privacy. While Boland admitted earning substantial compensation during his time there, his choice to walk away from significant equity underscores the depth of his convictions. For the young woman at the trial's center—whose identity remains protected—the outcome could set precedents for holding platforms accountable for algorithmic harms.

Looking ahead, this lawsuit joins others targeting YouTube and TikTok, potentially influencing how tech companies design for younger users. Regulators in the European Union and U.S. states like California are advancing laws requiring safety assessments for minors. Meta, valued at over $1 trillion, maintains that its innovations connect billions positively, but voices like Boland's suggest the human cost of unchecked growth. The jury's deliberations, whenever they begin, will weigh these competing narratives in a verdict that could ripple across the industry.
