Pentagon Signs AI Deals With Seven Companies — Anthropic Excluded Over Safety-Guardrail Dispute
The U.S. Department of Defense formalized AI procurement agreements with seven technology firms on May 1 — Google, Microsoft, OpenAI, SpaceX, Amazon Web Services, Oracle, and Nvidia — while Anthropic remains locked out due to its earlier designation as a supply-chain risk. That label, typically reserved for companies tied to foreign adversaries, was applied to Anthropic after the company insisted the Pentagon include explicit safety guardrails governing how Claude could be used in lethal-force decision loops. Anthropic's CFO Krishna Rao had previously warned in a regulatory filing that the designation could cost the company "multiple billions of dollars" in 2026 revenue.
Background: How Anthropic became a supply-chain risk
- Earlier in the year, the Department of War (as the Pentagon rebranded itself) designated Anthropic a supply-chain risk after negotiations over AI safety requirements in defense contracts broke down.
- Anthropic requested that any deployment in warfare-adjacent contexts include Constitutional AI constraints preventing autonomous lethal targeting recommendations — the DoD declined to accept those terms as binding contract language.
- The designation triggered a mandatory review by military commands and set a 180-day clock (from May) for removing any existing Anthropic products from DoD systems.
What the seven-company deal includes
The agreements signed May 1 cover classified network access, autonomous task execution for logistics and intelligence analysis, and code generation for defense software modernization programs. SpaceX's inclusion is notable given its recent Colossus 1 data-center deal with Anthropic. The two agreements are commercially separate, but the optics are striking: Anthropic rents compute from a company that now holds DoD contracts while itself remaining shut out of DoD work, an illustration of how entangled cross-company relationships in the AI sector have become.
Anthropic's exclusion is a reminder that AI vendor risk assessment must now include regulatory posture and government policy positions — not just API reliability and model quality. If your organization has U.S. federal contracts or subcontracts, any AI tool in your stack that touches covered work should be assessed against current DoD and GSA approved vendor lists. Anthropic is not currently on those lists.
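For organizations that maintain an inventory of AI tools, that assessment can be partially automated. The sketch below is a minimal, hypothetical illustration: the vendor names and the allowlist contents are placeholders, not an official DoD or GSA list, and any real audit would pull approved-vendor data from authoritative government sources rather than a hardcoded set.

```python
# Hypothetical sketch: flag AI tools in a stack whose vendor is not on
# an approved-vendor allowlist. The allowlist below is an illustrative
# placeholder, NOT real DoD/GSA data.

APPROVED_VENDORS = {  # illustrative subset only
    "Google", "Microsoft", "OpenAI", "SpaceX",
    "Amazon Web Services", "Oracle", "Nvidia",
}

def audit_stack(tools):
    """Return (tool, vendor) pairs whose vendor is not on the allowlist."""
    return [(tool, vendor) for tool, vendor in tools
            if vendor not in APPROVED_VENDORS]

# Example inventory of (tool name, vendor) pairs
stack = [
    ("code-assistant", "Anthropic"),
    ("doc-search", "Microsoft"),
]

flagged = audit_stack(stack)
# flagged → [("code-assistant", "Anthropic")]
```

In practice the inventory and allowlist would live in configuration or a compliance database, but the core check is the same set-membership test shown here.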
White House re-engagement opens a possible path
Axios reported the same day that the White House had quietly reopened dialogue with Anthropic in recent weeks, following Dario Amodei's visit with Chief of Staff Susie Wiles after the Mythos cybersecurity model preview. The discussions focus on whether a narrower set of safety commitments — rather than blanket contract language — could satisfy DoD requirements. No timeline has been confirmed, and the supply-chain designation remains in force.