President Donald Trump on Friday directed all federal agencies to phase out the use of Anthropic technology. The decision comes in the wake of a public standoff between Anthropic and the Pentagon over the safety and ethical implications of artificial intelligence (AI).
Trump spoke just before the Pentagon’s deadline for Anthropic to grant the military unrestricted access to its AI systems or face repercussions. The ultimatum followed a statement from Anthropic CEO Dario Amodei, who said the company “cannot in good conscience accede” to the Defense Department’s demands.
AI Safety and National Security Concerns
At the heart of the dispute is the use of AI in defense contracts and its implications for national security. Key concerns include the potential deployment of highly capable AI systems in scenarios involving lethal force or sensitive personal data, such as government surveillance.
While Anthropic, known for its chatbot Claude, might manage without the Pentagon contract, the ultimatum from Defense Secretary Pete Hegseth carries broader implications. The stakes are particularly high for a company that has rapidly evolved from a lesser-known research lab in San Francisco to one of the most valuable startups in the tech industry.
If Amodei stands by his decision, military officials have indicated they may not only terminate the contract but also label Anthropic a supply chain risk, a designation typically reserved for foreign adversaries that would jeopardize the company’s critical industry partnerships.
Anthropic’s Stance on Safeguards
Anthropic’s position has been one of caution. The company has sought specific assurances from the Pentagon that its AI technology would not be used for mass surveillance of American citizens or deployed in fully autonomous weaponry. But after months of private negotiations turned contentious, Anthropic condemned the latest contract language as a facade, warning that it would allow previously agreed-upon safeguards to be undermined.
Pentagon spokesperson Sean Parnell sought to clarify the military’s intentions, saying the department has no interest in using AI for illegal surveillance and does not aim to build autonomous weapons that operate without human oversight. Still, the details of how the Pentagon intends to employ Anthropic’s technology remain vague, raising further questions among industry observers.
The Broader Industry Impact
The conflict is further polarizing the tech ecosystem. Emil Michael, the undersecretary for research and engineering, criticized Amodei on social media, accusing him of seeking control over the U.S. military to the detriment of national safety. That view has found little traction in Silicon Valley, however: as support for Amodei’s stance grows, employees at competitors including OpenAI and Google have expressed solidarity through an open letter.
Companies like OpenAI and Google, which also hold military contracts, find themselves caught in the crossfire. Elon Musk joined the discourse, labeling Anthropic’s approach hostile to “Western Civilization,” a sentiment echoed after Claude’s guiding principles were found to emphasize non-Western perspectives.
Notably, OpenAI CEO Sam Altman, a former colleague of Amodei’s, sided with Anthropic, highlighting broader agreement within the tech community around safety protocols. Altman voiced concern about the Pentagon’s “threatening” tactics, suggesting a shared commitment among AI companies to uphold certain ethical standards.
Lawmakers from both sides of the aisle, along with former officials such as retired Air Force General Jack Shanahan, have also raised alarms about the Pentagon’s approach. Shanahan noted the irony in targeting Anthropic, a strategy with potential ramifications for everyone involved that recalls the earlier backlash Google faced over its involvement in Project Maven.
Consequences and Implications for Anthropic
The Pentagon’s firm stance reflects its determination to avoid any scenario in which a company dictates military operational decisions. Parnell reiterated that Anthropic has until 5:01 p.m. ET on Friday to comply or face significant consequences, including possible cancellation of its contract.
During a recent meeting between Hegseth and Amodei, military representatives warned of severe repercussions, including possibly invoking the Defense Production Act to sidestep Anthropic’s consent. Amodei called the threats contradictory: one position casts the company as a security risk while the other deems its technology critical to national defense.
Amodei’s hope is that the Pentagon will reconsider, given the role Claude already plays in military applications. If an agreement cannot be reached, he indicated that Anthropic would help the military transition smoothly to another provider.