Justice Department Says Anthropic Can’t Be Trusted With Warfighting Systems

by Adrian Russell


The Trump administration argued in a court filing on Tuesday that it did not violate Anthropic’s First Amendment rights by designating the AI developer a supply-chain risk and predicted that the company’s lawsuit against the government will fail.

“The First Amendment is not a license to unilaterally impose contract terms on the government, and Anthropic cites nothing to support such a radical conclusion,” US Department of Justice attorneys wrote.

The response was filed in a federal court in San Francisco, one of two venues where Anthropic is challenging the Pentagon’s decision to sanction the company with a label that can bar companies from defense contracts over concerns about potential security vulnerabilities. Anthropic argues the Trump administration overstepped its authority in applying the label and preventing the company’s technologies from being used inside the department. If the designation holds, Anthropic could lose billions of dollars in expected revenue this year.

Anthropic wants to resume business as usual until the litigation is resolved. Rita Lin, the judge overseeing the San Francisco case, has scheduled a hearing for next Tuesday to decide whether to honor Anthropic’s request.

Justice Department attorneys, writing for the Department of Defense and other agencies in the Tuesday filing, described Anthropic’s concerns about potentially losing business as “legally insufficient to constitute irreparable injury” and called on Lin to deny the company a reprieve.

The attorneys also wrote that the Trump administration was motivated to act because of “concerns about Anthropic’s potential future conduct if it retained access” to government technology systems. “No one has purported to restrict Anthropic’s expressive activity,” they wrote.

The government argues that Anthropic’s push to limit how the Pentagon can use its AI technology led defense secretary Pete Hegseth to “reasonably” determine that “Anthropic staff might sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, or operation of a national security system.”

The Department of Defense and Anthropic have been fighting over potential restrictions on the company’s Claude AI models. Anthropic believes its models shouldn’t be used to facilitate broad surveillance of Americans and that they are not currently reliable enough to power fully autonomous weapons.

Several legal experts previously told WIRED that Anthropic has a strong argument that the supply-chain measure amounts to illegal retaliation. But courts often favor national security arguments from the government, and Pentagon officials have described Anthropic as a contractor that has gone rogue and whose technologies cannot be trusted.

“In particular, DoW became concerned that allowing Anthropic continued access to DoW’s technical and operational warfighting infrastructure would introduce unacceptable risk into DoW supply chains,” Tuesday’s filing states. “AI systems are acutely vulnerable to manipulation, and Anthropic could attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations, if Anthropic—in its discretion—feels that its corporate ‘red lines’ are being crossed.”

The Defense Department and other federal agencies are working to replace Anthropic’s AI tools with products from competing tech companies in the next few months. One of the military’s top uses of Claude is through Palantir data analysis software, people familiar with the matter have told WIRED.

In Tuesday’s filing, the lawyers argued that the Pentagon “cannot simply flip a switch at a time when Anthropic currently is the only AI model cleared for use” on the department’s “classified systems and high-intensity combat operations are underway.” The department is working to deploy AI systems from Google, OpenAI, and xAI as alternatives.

A number of companies and groups, including AI researchers, Microsoft, a federal employee labor union, and former military leaders have filed court briefs in support of Anthropic. None have been filed in support of the government.

Anthropic has until Friday to file a counter response to the government’s arguments.
