You know the AI landscape has entered truly uncharted territory when Steve Bannon and Susan Rice are signing their names to the same document. It sounds like the setup to a political satire, but in March 2026, this became reality with the release of the "Pro-Human Declaration."
While this bipartisan coalition is trying to pump the brakes on unchecked AI development, a much more tangible collision is happening behind closed doors in Washington. The debate over AI safety has moved from theoretical academic papers to hardline contract negotiations, and the fallout is splitting the industry in two. If you’ve been wondering why your news feed is suddenly full of debates about "lawful use" and "off-switches," you aren’t alone. Let’s unpack the chaotic week that redefined the relationship between Silicon Valley and the State.
What exactly is the ‘Pro-Human Declaration’?
Think of this as a desperate attempt to find common ground before the train leaves the station. Organized by the Future of Life Institute, the "Pro-Human Declaration" was finalized in early March 2026. It represents a pivot away from the familiar engineer-led warnings toward a broad political coalition. When you have signatories ranging from populist firebrand Steve Bannon to former Obama official Susan Rice and retired Admiral Mike Mullen, you know the anxiety about AI autonomy is cutting across every traditional party line.
The Declaration outlines five non-negotiable pillars for AI governance. The most critical among them? Human Control: specifically, a requirement for mandatory physical "off-switches" that cannot be overridden by software. It also calls for a ban on reckless architectures (such as self-replicating code), safeguards against the concentration of power in too few hands, and strict legal accountability for AI developers.
Perhaps the most aggressive stance in the document is the call for a total prohibition on superintelligence development until there is a scientific consensus that it can be done safely. It’s a bold demand, but as we’re seeing with the Pentagon, political declarations often struggle to survive contact with military reality.
Why is the Pentagon freezing out Anthropic?
While the politicians were drafting declarations, a massive rift opened up between the Department of Defense and Anthropic, the makers of Claude. This isn’t just a disagreement over terms; it’s a fundamental clash of values.
According to reports, the standoff escalated in late February 2026. Defense Secretary Pete Hegseth drew a line in the sand, demanding that AI contracts include language permitting "all lawful use" of the models. In plain English? The Pentagon wants to use AI for whatever legal military operations it sees fit, without a private company’s Terms of Service getting in the way.
Anthropic CEO Dario Amodei refused. The company stood firm on its guardrails against autonomous weapons and mass surveillance. The government’s response was swift and brutal. The Pentagon designated Anthropic a "supply chain risk," and President Trump reportedly ordered federal agencies to cease using Claude entirely.
The catalyst for this blowout appears to be a specific incident in January 2026. Reports indicate that Anthropic's technology was used without authorization, with Palantir acting as an intermediary, during a raid to capture Venezuelan President Nicolás Maduro. That incident seems to have been the final straw for Defense officials tired of being told "no" by software guardrails.
How are OpenAI and xAI reacting to the defense demands?
Nature abhors a vacuum, and so do government contractors. As Anthropic is being pushed out of the lucrative Business-to-Government (B2G) sector, its primary rivals are moving in. Both OpenAI and xAI have reportedly agreed to the Pentagon’s "all lawful use" terms. This positions them to capture billions in defense contracts that might have otherwise gone to Anthropic.
This alignment isn’t without internal friction, however. We’ve seen reports that OpenAI’s robotics lead resigned in protest over these new defense deals, signaling that the culture war inside these labs is far from over. But from a business perspective, the market is bifurcating: you have "state-aligned" AI willing to serve national security interests without question, and "independent" AI trying to maintain ethical autonomy.
The Bottom Line
We are witnessing the end of the "self-regulation" era and the beginning of the "state alignment" era. By designating Anthropic a supply chain risk, the U.S. government has made it clear that it views AI not just as software, but as critical munitions that must be under state control. Anthropic is taking a massive gamble that consumer trust in an "ethical" AI is worth more than the guaranteed billions of the military-industrial complex. Meanwhile, the "Pro-Human Declaration" proves that while political rivals can agree on the dangers of AI, they are currently powerless to stop the military from integrating it. The winners here are OpenAI and xAI, who have effectively become the new defense primes, while Anthropic retreats to the consumer sector to see whether "trust" is a viable business model.