VANCOUVER, British Columbia — As generative artificial intelligence tools like ChatGPT weave deeper into daily life, a prominent tech journalist is urging the public not to succumb to fear but to actively challenge the powerful companies driving the technology's expansion.
Karen Hao, an MIT graduate and former reporter for The Wall Street Journal, addressed these concerns in a recent interview ahead of her appearance at the University of British Columbia's Chan Centre on March 12 at 7:30 p.m. There, she will converse with author Naomi Klein as part of the Pulitzer Spotlight Series, which trains journalists on covering AI. Tickets are available at chancentre.com. Hao's perspective is detailed in her bestselling book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, published in May 2025, which chronicles the evolution of OpenAI from a nonprofit aimed at benefiting humanity to a for-profit powerhouse with significant societal repercussions.
"We should absolutely be skeptical of the companies and the people behind the technologies that are most popular today, like ChatGPT," Hao said. "We should be doing the opposite of just being scared and paralyzed. We should be thinking strategically about how we can assert our agency and push back."
Hao began covering OpenAI in 2019, tracking its transformation amid rapid AI growth that has raised alarms about inadequate safeguards. The technology's proliferation has sparked debates over its effects on mental health, the environment, and politics, with critics arguing that profit motives often eclipse public welfare. Hao's book delves into the origins of OpenAI and highlights negative impacts, including the environmental toll of energy-intensive data centers and the political influence wielded by tech leaders.
Despite the dominance of billionaires like OpenAI CEO Sam Altman, Elon Musk, and Peter Thiel, Hao insists resistance is viable. She cites the failure of U.S. President Donald Trump's attempts to enact a 10-year moratorium on state AI laws as evidence of public influence. "There’s actually a perfect example of an instance in which people took agency and blocked this from happening," Hao said from her base in Hong Kong. "Trump tried this twice, and in both cases, it was hugely unpopular and it failed to pass Congress. And part of the reason was because people were calling their lawmakers like crazy, saying absolutely not. You should not be voting yes on this bill."
Building on that momentum, parents have filed lawsuits against AI companies, pressuring them and policymakers for regulations protecting children from mental health and suicide risks linked to the technology. "We see parents that are suing these companies and really pressuring these companies and pressuring policy-makers to actually pass some kind of sensible regulation at the state level for protecting their kids when it comes to mental-health issues and suicide issues," Hao noted.
Environmental activism has also slowed AI's advance. Protests have halted or delayed the construction of data centers, which are essential for powering AI systems, in Arizona and Georgia. "Data centre build-outs are key to the acceleration of this AI technology, and if the company cannot build the data centres as fast as they want to build them, that also slows down their pace of AI development," Hao explained. "This has been another mechanism by which, even in the absence of laws, people have been able to successfully stall a key ingredient for these companies’ agendas and force the companies to actually answer more to the public."
Hao points to international regulations as a model for broader change, noting that global companies must comply with the strictest standards worldwide. She highlights the European Union's General Data Protection Regulation (GDPR), which took effect in 2018 and governs how personal data is collected, used, and stored. "When Europe passed GDPR, the major privacy legislation, Americans ended up benefiting from it, too, even though it wasn’t in the U.S. that they passed it, because the companies had to build their platforms to be GDPR-compliant," she said.
In Canada, where U.S. tech firms are eyeing land and resources for data centers, Hao praises Prime Minister Mark Carney's firm stance against pressure from the south. On February 25, Canadian government officials met with OpenAI after it emerged that the company had banned the ChatGPT account of Jesse Van Rootselaar, the perpetrator of the February 10 mass shooting in Tumbler Ridge. The account had been banned in June 2025, months before the attack, but OpenAI did not notify law enforcement about its contents. The government had previously announced work on a new online harms bill, but as of late February, no concrete proposals had emerged.
"One of the things that I think is remarkable about Canada right now is that the prime minister is one of the few world leaders that’s been willing to speak truth to power, and I think that means that there’s a huge amount of opportunity for Canadian citizens to be pushing through it," Hao said. "(Canada) is one of the few places where I think we can actually push through regulation. And regulation is still a lever that exists to be pulled for putting in safeguards."
Hao clarifies that her critique targets generative AI specifically, not all artificial intelligence. She uses generative tools only to understand them for her reporting and relies on other AI applications in her work. "I think it’s important to say I’m not critical of all AI. AI refers to such a vast array of different technologies," she emphasized. "For me, it’s about what kind of AI should we be developing? How do we design it?"
To foster accountability, Hao advocates for community-level discussions about AI deployment. "Get together a group of parents, or get together a group of other patients at the same doctor’s office to have an actual conversation about whether or not you want AI to be used in that circumstance, and how to use AI in that circumstance, and which company to buy the tools from," she suggested. "A small act of daily resistance that you can engage in is to not just accept it, but actually say, ‘Wait a minute, can we talk about it first and have a collective conversation within our community?’ … If everyone in the world did that, I think we would just get to a way better place with how AI is being developed."
"Empires act in the interest of only themselves, not others," Hao wrote in her book. "What I always try to convey is these companies are empires, and we need to start treating them that way. If we take that seriously, which I think we absolutely should, then we should recognize that the biggest threat that these companies have is to our democracy … Empire is built on hierarchy and democracy is built on equality."
In Empire of AI, Hao balances profiles of tech executives with stories of those harmed by their decisions, countering narratives that glorify leaders while ignoring exploitation. She draws from global travels where people express frustration that technology no longer serves them, particularly in its effects on children. "What you are feeling is the fact that most of the technology that we engage with today is built by people who are not building it for us," Hao observed. "So, you feel this disconnect between what you’re being told, which is that technology is supposed to benefit you, and what you’re actually feeling in your life is that you’re losing control and agency because of these technologies."
The book's central warning is that empire-building has become a core business model for these firms, one that threatens to erode democratic foundations. As AI integrates further into society, from healthcare to education, Hao's call for proactive engagement lends weight to ongoing debates. With events like her Vancouver discussion and legislative efforts in Canada and beyond, the push for ethical AI development continues to gain traction, offering hope that public action can shape the technology's trajectory.
While life without AI seems increasingly implausible, Hao's message resonates amid rising concerns. Her work, through journalism and her book, aims to empower individuals to question and influence how these tools are created and deployed, ensuring they align with societal needs rather than unchecked corporate ambitions.
