The Jokic Ejection: Why Anthropic Got Tossed By The Pentagon
Anthropic just lost a major Pentagon AI contract, and the fallout lays bare the growing tension between ethical AI development and government demands. They might have the MVP stats, but the refs don't care about your True Shooting percentage. Let's go to the tape and look at who's gunning for their spot...
The Clause That Broke the Deal
So what was the flagrant foul? Yesterday's AP and Reuters reports laid it out: Anthropic and the Department of Defense butted heads over two specific red lines. Anthropic refused to budge on its prohibitions against using its AI for mass domestic surveillance and fully autonomous weapons. This wasn't some JV scrimmage; this was a battle for a starting spot on classified military networks, and the DoD wanted complete control of the playbook.
The Substitution: OpenAI Enters the Game
OpenAI telegraphed this max contract move back in January 2024, when it quietly scrubbed the phrase "military and warfare" from its usage policy. The company said at the time that its tech still couldn't be used to cause harm or develop weapons, but the blanket ban was gone, and industry insiders had been reading it as an opening bid for defense work ever since. So when OpenAI swooped in to grab the contract hours after Anthropic got benched, it wasn't a surprise substitution; it was the payoff of a two-year strategy. OpenAI says its deal carries similar safety principles, but it had spent two years paving the way for exactly this.
When Your True Shooting Percentage Doesn't Matter
Anthropic's compliance stat was a zero, so they got cut. Anthropic just dropped Claude Opus 4.6 this month, and its new 'Agent Teams' features, which let the AI autonomously coordinate complex tasks, are exactly the kind of force multiplier the DoD claims it wants. But even that wasn't enough to save the deal. The deciding stat wasn't technical capability; it was compliance.
The Hot Takes Are Flying
Some commentators are calling this a 'woke AI' issue, but they're missing the whole damn point. Others frame it as principled ethics versus naive business sense. This wasn't about philosophy; it was about two specific contract clauses, on domestic surveillance and autonomous weapons, that Anthropic refused to ditch. The fine print decided this game, not the ideology.
A New Kind of Double Standard?
And the double standard? That's the whole game. Anthropic gets ejected for standing firm on its safety clauses. OpenAI gets the max contract by including... what they *claim* are the same principles. Did OpenAI find a loophole, or did the DoD just prefer their brand of compliance? On the available evidence, the simpler read is that the DoD wanted a contractor willing to run its plays.
The Verdict: A Calculated Power Play
Ultimately, this wasn't a debate over AI ethics. It was a power play. The Pentagon didn't like a contractor telling them what they could and couldn't do with their tech. They wanted "any lawful use," and Anthropic's red lines on mass surveillance and autonomous weapons were a non-starter. OpenAI, having already softened its stance on military use back in 2024, was ready to run the Pentagon's system; crucially, though, OpenAI states its agreement includes the same red lines on mass domestic surveillance and autonomous weapons that Anthropic was fighting for. Anthropic tried to dictate terms on a contract worth up to $200 million, and OpenAI capitalized. The 'principled stand' was a business error, and the real winner is the DoD, which now has a compliant AI partner and has sent a clear message to the rest of Silicon Valley: align with DoD priorities or lose the contract.