Author: James Swenson, Managing Director, Ethixbase360
In late February 2026, the US Department of Defense (DoD) designated the artificial intelligence (AI) company Anthropic a supply chain risk, a first for a US company. The dispute could have ended with the cancellation of Anthropic’s contract, but instead it escalated. In doing so, the Administration sent a signal that any company doing business with the US Government (USG) must now evaluate.
Background
The timeline of this case is well-documented, but worth briefly recapping. In 2024, Anthropic became the first AI company to deploy on USG classified networks, announcing at the time exceptions to its usage policy to accommodate the work. In July 2025, Anthropic and DoD signed a two-year contract with a $200 million ceiling to “prototype frontier AI capabilities that advance US national security.” However, the relationship soured after the Pentagon appeared to use Anthropic’s AI model, Claude, in the capture of Venezuela’s then-President Maduro.
When Anthropic objected to the use of Claude in that operation, both Defense Secretary Pete Hegseth and President Donald Trump insisted that DoD have unfettered access to Claude’s capabilities. Anthropic CEO Dario Amodei would not agree to the firm’s technology being used for mass domestic surveillance or fully autonomous weapons, at least for the time being. When talks failed to yield progress, President Trump ordered all federal agencies to stop using Claude. Secretary Hegseth then designated the company a supply chain risk, and formal notification was provided to Anthropic in early March. DoD invoked two laws to support the designation: (i) 10 U.S.C. § 3252; and (ii) the Federal Acquisition Supply Chain Security Act of 2018 (FASCSA).
Legal Process
In late March, California federal judge Rita Lin issued a preliminary injunction blocking enforcement of the DoD’s supply chain risk designation under 10 U.S.C. § 3252, giving the company some reprieve. But the case is far from resolved. The Pentagon has indicated it may appeal the California decision, and a separate challenge to the designation under the FASCSA remains pending before a panel of the US Court of Appeals for the DC Circuit. In early April, that court denied Anthropic’s request to temporarily block the FASCSA-based designation, meaning Anthropic remains excluded from DoD contracts while the merits of that case are still being heard. The result is a split legal landscape: Anthropic retains the ability to work with civilian agencies under the California injunction, but the Pentagon exclusion stands for now.
Analysis
To a degree, this situation is an anomaly. Software firms typically make their products available through end-user license agreements that allow usage restrictions. The USG is effectively claiming an exemption from such restrictions, insisting on the use of AI systems for “any lawful government purpose.” The General Services Administration has incorporated this idea into a new standard contract clause in the Multiple Award Schedule, which it uses to negotiate pricing and terms. Among other elements, the clause would prevent contractors from taking Anthropic’s approach.
This reflects a broader shift in how the Administration is framing AI governance: as a matter of national competitiveness and operational control. Companies across the economy should pay careful attention to two of President Trump’s Executive Orders (EOs): the January 2025 EO on “Removing Barriers to American Leadership in Artificial Intelligence” and the July 2025 EO on “Preventing Woke AI in the Federal Government.” Together, these help explain how the Administration is thinking about its relationship with the private sector. They signal that the Administration treats AI usage restrictions imposed by private firms as barriers to American leadership, a framing with direct implications for any company whose AI governance policies conflict with government procurement expectations.
Implications for Third Parties
From a compliance point of view, this case raises a number of issues. If the designation is upheld, the ripple effects will spread widely across the technology ecosystem: government agencies, contractors, grantees, and more. A firm that still tries to maintain guardrails on its AI systems could face serious legal repercussions or risks from internal whistleblowers. Firms must understand their commercial commitments to the Government in detail and disclose any concerns. Critically, under FASCSA, the designation flows down through contracting structures via FAR clause 52.204-30, which is incorporated into all subcontracts. If companies find that USG procurement clauses override their licensing agreements or violate internal company policies, and they are not willing to bend, they might be advised to refrain from engaging in USG contracting.
This means suppliers, technology vendors, and subcontractors embedded in DoD supply chains may inherit restrictions on Claude usage even without a direct government contract. Compliance teams should treat this as a third-party risk management issue: audit AI tool usage across the full enterprise and supply chain, maintain visibility into ownership and licensing terms for all AI products in use, and establish ongoing monitoring to catch changes in designation status that could affect downstream partners. Among other steps, contractors must inventory their use of Claude, create contingency plans, and consider preparing a Request for Equitable Adjustment to recover transition costs.
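The audit-and-monitor steps above can be sketched in code. The following is a minimal, hypothetical illustration of an AI tool inventory with a designation check; the tool names, vendors, business units, and the designation list are all illustrative assumptions, not real data, and any real program would pull from procurement systems and official designation notices rather than a hard-coded list.

```python
"""Hypothetical sketch: inventory AI tools in use and flag those
whose vendor appears on a supply chain risk designation list."""
from dataclasses import dataclass


@dataclass
class AITool:
    name: str
    vendor: str
    used_by: str        # business unit or subcontractor using the tool
    license_terms: str  # summary of the applicable licensing terms


def flag_designated(inventory: list[AITool], designated_vendors: list[str]) -> list[AITool]:
    """Return the tools whose vendor is on the designation list."""
    designated = {v.lower() for v in designated_vendors}
    return [tool for tool in inventory if tool.vendor.lower() in designated]


# Illustrative inventory (assumed data)
inventory = [
    AITool("Claude", "Anthropic", "R&D subcontractor", "EULA with usage restrictions"),
    AITool("InternalLLM", "In-house", "Legal department", "Internal policy only"),
]

for tool in flag_designated(inventory, ["Anthropic"]):
    print(f"Review required: {tool.name} (used by {tool.used_by})")
```

In practice, the monitoring piece would re-run such a check whenever designation lists change, so that downstream partners inheriting FAR flow-down clauses can be notified promptly.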
If the DC Circuit lifts the designation, it could still be difficult to reverse commercial processes already set in motion. As Judge Lin wrote, “Everyone, including Anthropic, agrees that [DoD] may permissibly stop using Claude and look for a new AI vendor who will allow ‘all lawful uses’ of its technology.” The USG could also find other ways to bar its contractors and subcontractors from using Claude. In fact, while Claude remains banned for DoD contractors, other agencies are reportedly phasing it out.
Looking Ahead
As of this writing, there are some signs that the Administration may soften its posture. In early April, Anthropic announced that its new model, Mythos, has capabilities that pose cybersecurity risks. In response, the company rolled out an initiative called Project Glasswing to identify critical software vulnerabilities across major operating systems. This news attracted interest from the Cybersecurity and Infrastructure Security Agency, the Treasury Department, the Federal Reserve, and major banks. On April 18, Amodei met with White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent, and a version of Mythos may be made available to major federal agencies. However, the Pentagon dispute remains unresolved. This raises an important question: how can some USG agencies, including reportedly the National Security Agency, be working with Anthropic’s tools if the company is a designated supply chain risk?
As a final risk, both technology companies and companies that use affected technologies should consider that compliance with this Administration’s approach could lead to future blowback from stakeholders or consumers, or even legal action in the event of political or policy changes. The commercial implications of working with an Administration considered polarizing should not be underestimated.