The ultimatum and Anthropic’s refusal
The Department of War issued a stark demand to Anthropic: "Remove all guard rails from their models to be able to be used by the Department of War or get blacklisted." That ultimatum framed the dispute between the government and the AI lab.
Anthropic and its CEO, Dario Amodei, refused the demand. They explicitly declined to remove guardrails that would enable uses the company considers unacceptable.
Anthropic’s existing relationship with the DoD
Anthropic signed a contract with the Department of Defense in July worth $200 million. The company was the first AI lab reported to integrate models into mission workflows on classified networks.
Reports indicate Anthropic’s models have already been accessed for defense purposes. The relationship with the DoD was the context in which the guardrail dispute escalated.
Pentagon demands and justifications
The Pentagon insisted it needed Anthropic to provide models without certain constraints so the military could use them for defensive purposes. Officials framed the demand as requiring "lawful AI use without constraints."
At the same time, the DoD stated it had no interest in using AI for mass surveillance or developing autonomous weapons. That claim was presented as justification for requesting guardrail removals.
Anthropic’s red lines
Anthropic identified two absolute red lines. The company would not permit "using AI, specifically the Claude family of models, for mass surveillance against the American people." It also refused "using AI to develop fully autonomous weapons with no human in the loop."
Those red lines shaped Anthropic’s refusal and informed the terms it sought to preserve while continuing to work with the DoD.
Pentagon threats
Officials responded to Anthropic’s refusal with a series of threats. They warned they would categorize Anthropic as a supply chain risk, a designation that, as reported, has "never been used for an American company."
The DoD also threatened to cancel the $200 million contract and to invoke the Defense Production Act (DPA) — the 1950 law that gives the president authority to direct private industry for national defense — to compel unrestricted provision of models.
Background and context
In July, Anthropic joined three other AI companies — OpenAI, DeepMind, and xAI — in securing contracts for frontier AI capabilities, each worth up to $200 million. In January, the Pentagon reportedly used Anthropic's Claude model for a mission in Venezuela via a contract with Palantir.
Anthropic’s usage guidelines explicitly prohibit Claude from facilitating violence, developing weapons, or conducting surveillance. Negotiations between Anthropic and the DoD had been ongoing for months when the ultimatum arrived.
Dario Amodei’s letter and arguments
Dario Amodei wrote that Anthropic supports using AI for defense but warned that some uses would undermine democratic values or exceed current safe capabilities. The letter argued frontier AI systems "are simply not reliable enough to power fully autonomous weapons."
Anthropic highlighted technical unreliability, such as hallucinations, and the resulting risk to warfighters. The company emphasized that autonomous systems without human oversight cannot reliably exercise critical judgment.
Anthropic stated that its "strong preference is to continue to serve the department and our war fighters with our two requested safeguards in place," and offered to facilitate a smooth transition if the DoD chose to offboard.
Contradictions in the DoD’s position
Observers noted contradictions between the Pentagon’s statements and its demands. The DoD claimed it would not use AI for mass surveillance or fully autonomous weapons while simultaneously asking Anthropic to remove guardrails that block those uses.
The DoD also sought to compel Anthropic to work with the department while threatening to label the company a security risk, creating tension between reliance on Anthropic’s services and punitive measures.
DoD counterarguments and concessions
The DoD argued the military should be trusted to "do the right thing." Officials offered to acknowledge in writing federal laws restricting military surveillance of Americans and to include language acknowledging Pentagon policies on autonomous weapons.
DoD representatives noted existing legal and policy constraints and argued that restating them in new safeguards was unnecessary. Anthropic judged those concessions inadequate.
Involvement of other AI companies
OpenAI CEO Sam Altman said OpenAI is working with the Pentagon while maintaining safety guardrails. Altman endorsed Anthropic’s red lines on mass surveillance and autonomous lethal weapons, and said he trusted Anthropic to make its own decision.
OpenAI framed the dispute in part as one about control. Around 200 engineers from top AI companies signed a letter supporting Anthropic’s refusal.
Political intervention
President Trump issued a directive instructing federal agencies to cease all use of Anthropic’s technology. He stated, "The United States of America will never allow a radical left-wing company to dictate how our great military fights and wins wars," framing the matter in political terms.
Reports suggested political pressure factored into the escalation between Anthropic and the DoD.
Latest escalation and supply chain designation
Pete Hegseth called Anthropic’s refusal arrogant and a betrayal, and directed the Department of War to designate Anthropic as a supply chain risk to national security effective immediately. He said, "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
Officials emphasized the designation's significance: "This is the first time a US company has ever been designated a supply chain risk. This is massive." The prohibition affects contractors, suppliers, and partners that do business with the US military.
| Hard Facts | Details |
|---|---|
| Contract value | $200 million |
| Contract signed | July |
| Reported Pentagon use of Claude | January (Venezuela mission via Palantir) |
| Defense Production Act passed | 1950 |
| Engineers supporting Anthropic | 200 |
Takeaways
- The Department of War demanded Anthropic remove guardrails from its AI models or face blacklisting.
- Anthropic refused to remove safeguards, citing absolute red lines on mass surveillance and fully autonomous weapons without human oversight.
- The Pentagon threatened to cancel a $200 million contract, label Anthropic a supply chain risk, and invoke the Defense Production Act.
- Other AI companies and about 200 engineers publicly aligned with Anthropic, while political directives pushed agencies to cease using Anthropic technology.
- Pete Hegseth's supply chain designation barred DoD contractors from commercial activity with Anthropic, a first for a US company.