Anthropic’s engagement with the US government

Source: YouTube video by CBS News (video ID: MPTNHrq_4LU)

Anthropic presents itself as highly proactive in supporting US national security. The company says it was the first to place models on the classified cloud and to produce custom models for national security use. Those models are reported to be deployed across intelligence and military applications, including cyber and combat support operations.

The stated motivation is to defend the country from autocratic adversaries like China and Russia. Anthropic frames this engagement as both pragmatic and patriotic: "I believe we have to defend our country from autocratic adversaries like China and like Russia. And so we've been very lean-forward."

Anthropic’s red lines

Anthropic says it accepts 98–99% of the US government's desired AI use cases, but draws two red lines it will not cross: domestic mass surveillance and fully autonomous weapons. The company emphasizes that these are longstanding, non-negotiable boundaries: "Our position is clear. We have these two red lines. We've had them from day one."

Red line | Core concern
Domestic mass surveillance | AI enabling mass analysis of data collected by private firms; technology outpacing law and Fourth Amendment interpretations
Fully autonomous weapons | Weapons firing without human involvement; current systems not reliable enough; oversight and accountability gaps

Anthropic frames these limits as tied to American values: the right not to be spied on and the need for human decision-making in war.

The negotiation and its breakdown

Anthropic reports that the Pentagon initially "agreed in principle" to the two restrictions but then presented an ultimatum with a very short negotiation window: the company says it was asked to agree to terms within three days. Anthropic felt the Pentagon's proposed language failed to meaningfully commit to the restrictions, citing phrases like "if the Pentagon deems it appropriate."

A Pentagon spokesman, Sean Parnell, reiterated a stance of allowing "all lawful use," which Anthropic says did not address its exceptions. The company reports public accusations, including President Trump characterizing Anthropic’s stance as "selfishness" that endangers American lives.

Government actions and designation

Anthropic states that the Department of War designated it a "supply chain risk." The company notes that this type of designation is typically reserved for foreign adversaries and cites Kaspersky Labs as an example of a company formerly designated as a supply chain risk. Anthropic views the designation as unprecedented, punitive, and retaliatory against an American company.

The designation and related statements from the Secretary of Defense and others prompted concerns about overreach. Anthropic describes a tweet by Secretary Hegseth as inaccurate and as exceeding lawful authority, and characterizes the public messaging as designed to create fear, uncertainty, and doubt.

Anthropic’s response and commitments

Despite the dispute and the designation, Anthropic states it remains willing to support the Department of War under its stated red lines. The company says it offered to provide continuity, to prototype autonomous systems in a sandbox, and to facilitate offboarding so a competitor could onboard if necessary. Anthropic claims it has tried hard to reach a deal and remains willing to continue dialogue.

Anthropic stresses that an agreement requires both sides and that it is prepared to serve national security within the limits it has set: "An agreement requires both sides. We, for our side, are willing to serve the national security of this country." The company also reports concern about interruption of services if the designation forces sudden removal from systems.

Justification for the red lines

Anthropic grounds its stance in the rapid pace and novelty of AI. The company emphasizes that AI development is on an exponential trend, asserting that the amount of computation for models "doubles every four months." It contrasts AI’s speed with more established technologies like aircraft, arguing that laws and democratic oversight struggle to keep up.
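To make the quoted doubling time concrete, here is a back-of-the-envelope sketch of the growth it implies (the four-month figure is the claim from the interview; the derived multipliers are simple arithmetic, not figures from the video):

```python
# Implied growth rate from the quoted claim that training compute
# "doubles every four months".
doubling_months = 4
months_per_year = 12

doublings_per_year = months_per_year / doubling_months  # 3 doublings per year
yearly_multiplier = 2 ** doublings_per_year             # 2^3 = 8x per year
two_year_multiplier = 2 ** (24 / doubling_months)       # 2^6 = 64x over two years

print(yearly_multiplier)    # 8.0
print(two_year_multiplier)  # 64.0
```

At that pace, capability-relevant inputs grow roughly 8x per year, which illustrates why Anthropic argues that laws and oversight mechanisms built for slower-moving technologies struggle to keep up.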

The company argues the two restricted use cases implicate core American values and practical risks. For domestic mass surveillance, Anthropic says the technology is "getting ahead of the law" and that judicial and legislative frameworks have not caught up. For autonomous weapons, the company highlights reliability risks—"the AI systems of today are nowhere near reliable enough to make fully autonomous weapons"—and worries about oversight and accountability if machines perform lethal actions without human judgment.

Anthropic also frames these stances strategically: domestic mass surveillance "does not help the US catch up with its adversaries," and the company rejects a "race to the bottom" approach to warfare and policy.

Business impact and legal posture

Anthropic assesses the immediate impact of the supply chain designation as "fairly small," and expresses confidence in its ability to survive the dispute: it expects to be "fine." The company notes that, to date, it has received only public tweets rather than formal documentation of the designation. If formal action is taken, Anthropic says it would challenge such actions in court.

Anthropic emphasizes its status as a private company that can choose customers and that there are alternatives in the market: "we are a private company, right? We can choose to sell or not sell whatever we want. There are other providers." The company also warns that forcing sudden removal could set military users back months.

Ideology, neutrality, and bipartisan engagement

Anthropic denies ideological or partisan motivations for its red lines. The company stresses neutrality and even-handedness, saying it speaks on AI policy where it has expertise and avoids general political positions. Anthropic cites participation in events with the President on energy for AI and support for the administration’s AI action plan as examples of bipartisan engagement.

The company frames disagreement with government policy as consistent with patriotic principles: "Disagreeing with the government is the most American thing in the world and we are patriots in everything we have done here." Anthropic maintains that their stance is about principle and democratic values rather than any particular person or administration.

  Takeaways

  • Anthropic says it supports nearly all US AI uses but draws firm red lines on domestic mass surveillance and fully autonomous weapons.
  • The company reports a negotiation breakdown after a three-day ultimatum and says Pentagon language did not meaningfully accept its exceptions.
  • The Department of War designated Anthropic a supply chain risk, a step the company describes as unprecedented and punitive against an American firm.
  • Anthropic justifies its stance by citing AI’s exponential pace, concerns about legal lag, reliability, oversight, and the preservation of democratic values.
  • Anthropic expects limited immediate business harm, remains willing to work under its terms, and would legally challenge formal adverse actions.
