Anthropic has reluctantly become one of the few checks on the military's expanding AI ambitions, a role no private company was built to play

Anthropic, the AI company behind the Claude chatbot, has gone from a relatively quiet player in the artificial intelligence industry to the center of one of Silicon Valley's most dramatic confrontations with the Trump administration.

The conflict stems from Anthropic's refusal to allow Claude to be used for domestic mass surveillance or in autonomous weapons systems capable of killing without human input. After the company rejected a Pentagon deadline, Defense Secretary Pete Hegseth accused Anthropic of "arrogance and betrayal," and the Department of Defense formally designated the company a supply-chain risk, an unprecedented move against an American firm, demanding that other businesses sever ties with it.

The fallout has been chaotic. OpenAI struck its own deal with the DoD, prompting Anthropic CEO Dario Amodei to accuse rival Sam Altman of giving "dictator-style praise" to Donald Trump, a remark for which Amodei later apologized. Trump said he "fired them like dogs." Anthropic has pledged to challenge the supply-chain designation in court while reportedly reopening negotiations with the Pentagon.

The standoff exposes deep contradictions at Anthropic's core. Founded in 2021 by Dario and Daniela Amodei after they departed OpenAI, the company was built on a mission of AI safety and responsible development. Yet it has simultaneously partnered with defense giant Palantir, allowing Claude to be embedded in classified military systems and used in operations to determine bombing targets in Iran.

The crux of the dispute is that Anthropic's original military contracts did not lock in permanent safety guardrails. When the Pentagon pushed to loosen the restrictions for a wider range of uses, Anthropic reached the limit of what it would accept.

The situation highlights a broader problem: dual-use technology developed for consumers can be repurposed in ways its creators oppose but have little power to stop. There is also what experts call a "double black box": tech companies can't fully see how their tools are used in classified environments, while the military doesn't fully understand how the AI actually works.

With Congress yet to regulate autonomous weapons, Anthropic has reluctantly become one of the few checks on the military's expanding AI ambitions.