How a standoff between Silicon Valley and the Pentagon is forcing every major AI company to declare what its technology will — and won't — be used for.

By the time Friday night ended, the AI industry had been split down the middle — and then, unexpectedly, stitched back together by something it rarely shows: a conscience.

It began with Anthropic, the AI safety company behind the Claude assistant, and a refusal. For months, U.S. defense officials had been pressuring the company to strip away ethical guardrails from its systems — guardrails that prevent Claude from being used to autonomously kill people or surveil citizens at scale. Anthropic said no. Then it kept saying no. And then, on Friday, the White House decided it had heard enough.

President Trump took to Truth Social with his characteristic flair for the dramatic. Calling Anthropic's leadership "leftwing nut jobs," he directed every federal agency to immediately cease using the company's technology. It was a swift and sweeping punishment — the kind usually reserved for geopolitical adversaries, not American tech startups.

A Rival Steps In

Within hours, OpenAI CEO Sam Altman was on X with an announcement that would have seemed unthinkable just days earlier. His company, Anthropic's fiercest commercial rival, had struck a deal to supply AI to classified Pentagon networks. But here was the twist: Altman claimed the deal came with the very same ethical protections Anthropic had fought for and been punished over.

"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force," Altman wrote, adding that the Pentagon "agrees with these principles" and that those commitments were embedded in the contract itself.

If true, it meant OpenAI had achieved in negotiations what Anthropic couldn't — or perhaps that the administration, having made its point with one company, was now willing to accept the very terms it had just rejected. The irony was hard to miss.

The Open Letter That Nobody Expected

What made the story stranger still was what was happening among the rank and file. Despite the public rivalry between their companies, nearly 500 employees from OpenAI and Google signed an open letter standing behind Anthropic. "We will not be divided," the letter declared — a direct message to a Pentagon that had apparently been playing competitors against each other, hoping that fear of losing contracts would erode each company's principles.

"The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused," the letter read. "They're trying to divide each company with fear that the other will give in."

Altman, for his part, seemed aware that his own employees were watching closely. In a late-night memo obtained by Axios, he sought to reassure them that OpenAI had not abandoned its principles — that the company's "red lines" around mass surveillance and autonomous lethal weapons remained intact.

Anthropic's Unmoving Position

Meanwhile, Anthropic — freshly ejected from the federal government's supplier list — showed no sign of wavering. In a statement that was equal parts defiant and measured, the company said it had tried in good faith for months to reach a workable agreement. It insisted that its two exceptions — no mass domestic surveillance, no fully autonomous weapons — were narrow enough that they hadn't impacted a single government mission.

"No amount of intimidation or punishment," the company said, "will change our position."

It was a remarkable statement from a company that had just been effectively blacklisted by the U.S. government — and one that, depending on how the next few weeks unfold, may come to define how the industry navigates one of its most consequential questions: who ultimately controls what AI can be made to do?

The Bigger Stakes

Hovering over all of this was a number that put the commercial pressures into sharp relief. Also on Friday, OpenAI announced a $110 billion fundraising round — a staggering sum that would value the company at $840 billion. The AI arms race, both technological and geopolitical, is accelerating at a pace that makes every ethical decision feel both more urgent and more expensive.

The question the industry now faces isn't just about contracts or compliance. It's about whether the guardrails that AI companies build into their products are principles — or just policies. And whether, when power applies enough pressure, the difference between those two things holds.

For now, at least, Anthropic seems to believe it does.

Source: https://www.theguardian.com/technology/2026/feb/28/openai-us-military-anthropic