The Appleton Times

Truth. Honesty. Innovation.

Technology

Anthropic doesn’t trust the Pentagon, and neither should you

By Thomas Anderson

1 day ago


Anthropic has sued the Pentagon over a supply chain risk designation, citing violations of its constitutional rights amid fears of AI-enabled mass surveillance. Drawing on NSA history from the Patriot Act through the Snowden leaks, experts like Mike Masnick point to decades of interpretive overreach that has eroded trust in government promises.

In an escalating legal showdown that highlights tensions between artificial intelligence innovators and the U.S. military, Anthropic, the developer of the Claude AI model, has filed a lawsuit against the Pentagon, accusing it of violating the company's First and Fifth Amendment rights. The suit, filed recently, challenges the Department of Defense's designation of Anthropic as a supply chain risk, a label the company says is an attempt to "destroy the economic value created by one of the world’s fastest-growing private companies." This dispute, unfolding amid broader debates over AI's role in national security, has drawn attention to longstanding concerns about government surveillance practices, particularly those involving the National Security Agency (NSA).

The conflict stems from Anthropic's reluctance to engage in certain contracts with the government, including those that could enable expanded surveillance capabilities through AI. According to details emerging from the case, the Pentagon's risk designation came just days before Anthropic's legal action, prompting the company to argue that the move infringes on its free speech and due process protections. While the full scope of the contract negotiations remains under wraps, experts suggest the core issue revolves around Anthropic's "red lines"—firm boundaries against developing autonomous weapons and aiding mass surveillance.

Mike Masnick, founder and CEO of the tech policy site Techdirt, provided insight into the surveillance angle during a recent episode of The Verge's Decoder podcast. Masnick, who has covered digital privacy and government overreach for decades, emphasized the historical context that fuels Anthropic's distrust. "There’s what the law says the government can do when it comes to surveilling us, and then what the government wants to do," Masnick said. "And most importantly, there’s what the government says the law says it can do, which is often exactly the opposite of what any normal person simply reading the law would think."

The roots of these concerns trace back to the post-9/11 era, when the U.S. Congress passed the Patriot Act in October 2001 under President George W. Bush. Intended to combat terrorism, the legislation expanded surveillance powers, allowing the government broader access to communications data. Over time, these authorities were interpreted in ways that Masnick described as relying on an "NSA dictionary" distinct from everyday language. For instance, the term "target"—meant to limit surveillance to non-U.S. persons—has been stretched to include any communication mentioning a foreign entity, even if it involves Americans.

This interpretive flexibility was starkly revealed in 2013 through leaks by Edward Snowden, a former NSA contractor. Snowden's disclosures, published by journalists like Glenn Greenwald, Barton Gellman, and Laura Poitras, exposed programs such as PRISM, which collected vast amounts of internet data from tech companies. The revelations prompted widespread outrage and reforms, including the USA Freedom Act of 2015, which curbed some bulk collection practices. Yet, Masnick noted that core issues persist, particularly under Executive Order 12333, signed by President Ronald Reagan in 1981.

Executive Order 12333 outlines guidelines for intelligence activities, ostensibly restricting surveillance of U.S. persons. However, as technology evolved with the internet's growth, the NSA leveraged it to intercept communications routed abroad—even domestic ones. "If I’m texting you and a message went from me in California through a fiber optic cable that happened to leave the U.S., the NSA could put a tap in the part once it’s outside the U.S. and collect that information, even if it was just going to you within the U.S.," Masnick explained. The agency could then retain and search this data, often through so-called "backdoor searches," despite promises to minimize collection on Americans.

Such practices have spanned administrations of both parties. In a notable 2013 Senate hearing, Director of National Intelligence James Clapper testified that the NSA does not "wittingly" collect data on millions of Americans—a statement later contradicted by Snowden's leaks and deemed misleading by critics, including Senator Ron Wyden, a Democrat from Oregon. Wyden had repeatedly raised alarms on the Senate floor, hinting at undisclosed overreach without offering specifics because of classification barriers. Masnick highlighted this bipartisan pattern: "There are a lot of incremental bad things under presidents of both parties, under congresses of both parties."

The Foreign Intelligence Surveillance Court (FISC), established by the Foreign Intelligence Surveillance Act of 1978, was meant to oversee such activities. Yet, it has operated largely in secret, with only the government's side presenting cases. Historical data shows approval rates exceeding 99 percent for surveillance warrants, leading critics to call it a "rubber stamp." Masnick attributed this to the lack of adversarial proceedings: "One of the problems with the intelligence community and the setup of it is that you don’t have that adversarial situation. That makes it easier for one side to justify the argument that they’re making."

Anthropic's stance reflects these historical precedents. The company, founded in 2021 by former OpenAI executives including Dario and Daniela Amodei, has positioned itself as safety-focused, declining deals that cross its ethical boundaries. In blog posts and public statements, Anthropic has cited fears that AI could supercharge surveillance, potentially analyzing patterns in the massive datasets already hoarded by agencies like the NSA. The Pentagon, for its part, has not publicly detailed its rationale for the supply chain risk label, but officials have emphasized the need for secure AI technologies in defense applications.

The lawsuit adds a new layer to ongoing debates about AI governance. While Anthropic pushes back, competitors like OpenAI have pursued government contracts, including partnerships with the Defense Department for tools like chatbots tailored for military use. OpenAI's shift came after initially restricting military applications, a decision CEO Sam Altman announced in 2024 amid pressure to support national security efforts. Anthropic, however, views the Pentagon's actions as punitive, potentially steering business toward rivals.

Legal experts anticipate the case could drag on for months, with hearings likely in federal court in Washington, D.C. The complaint alleges economic sabotage, pointing to Anthropic's rapid growth—valued at over $18 billion in recent funding rounds—as evidence of the designation's impact. "We’re going to be talking about the twists and turns of that case on The Verge and here on Decoder in the months to come," said Nilay Patel, The Verge's editor-in-chief, during the podcast introduction.

Beyond the courtroom, the dispute underscores broader implications for AI's integration into surveillance. With AI models capable of processing petabytes of data, critics worry about an unprecedented expansion of the "surveillance state." Masnick warned that both Democrats and Republicans have incrementally enlarged these powers, driven by fears of terrorism. "Nobody, and certainly no president, wants to be president during the time when there’s a big terrorist attack," he said, noting how administrations bend legal interpretations to avoid political fallout.

As the case progresses, it may force greater transparency on how AI intersects with intelligence gathering. The NSA, part of the Department of Defense, continues to evolve its programs, with recent reports indicating increased use of machine learning for threat detection. Anthropic's challenge could set precedents for other tech firms wary of military entanglements, especially in an era where AI development races ahead of regulation.

Looking ahead, observers expect more public discourse, fueled by social media and press briefings. The Trump administration's influence lingers in policy circles, though current dynamics play out under President Biden. Whatever the outcome, the Anthropic-Pentagon clash serves as a flashpoint, reminding stakeholders of the delicate balance between innovation, privacy, and security in America's digital landscape.

In the end, this legal battle may not resolve the deeper trust deficit. As Masnick put it, the government's history of redefining surveillance terms—from "target" to encompass incidental U.S. data collection—leaves little room for complacency. For companies like Anthropic, the fight is as much about preserving principles as protecting profits, in a surveillance ecosystem built on decades of quiet expansions.
