OpenAI’s Classified Deal—What’s Concealed?

OpenAI’s new classified Pentagon deal exposes a hard question conservatives have been asking for years: who really controls powerful tech—elected government accountable to voters, or insulated executives setting “red lines” behind closed doors?

Story Snapshot

  • OpenAI signed a classified agreement with the U.S. Department of War on March 1, 2026, after earlier signaling support for Anthropic’s refusal to accept broad “lawful purposes” language.
  • The Trump administration ordered federal agencies to phase out Anthropic over six months after the firm declined the same kind of offer and was labeled a “supply chain risk.”
  • OpenAI says its contract includes three “red lines” barring surveillance, autonomous weapons, and fully automated “high-stakes” decisions, but the full text remains classified.
  • Internal dissent surfaced publicly, with OpenAI alignment researcher Leo Gao criticizing the safeguards as “window dressing.”

What OpenAI Actually Agreed To—And What’s Still Hidden

OpenAI confirmed it signed a deal allowing its AI models to operate on classified military networks, and it later published a partial version of the agreement. The company says its approach relies on “layered safeguards,” combining legal constraints, usage policies, and technical limits. OpenAI also emphasized three non-negotiables: no surveillance, no autonomous weapons, and no automation of “high-stakes” decisions. Because the full contract is classified, outside verification of enforcement details remains limited.

Sam Altman defended the agreement in a public Q&A on X, admitting the process was rushed and that the optics “don’t look good.” OpenAI’s leadership argued that engaging—rather than walking away—reduces risk by setting boundaries in writing and by shaping how government users deploy the tools. That argument depends on practical enforcement: how violations are detected, what penalties exist, and whether the government can reinterpret terms under evolving legal authorities.

Anthropic’s Rejection, Trump’s Phase-Out Order, and the “Supply Chain Risk” Label

The immediate backdrop was Anthropic’s refusal to accept a broad “all lawful purposes” clause, citing concerns tied to mass surveillance and autonomous weapons. The Trump administration responded by ordering agencies to phase out Anthropic over six months, after the Department of War under Secretary Pete Hegseth designated the company a supply chain risk. The result was a dramatic real-time lesson for the AI sector: saying “no” to Washington can carry steep consequences.

OpenAI, for its part, publicly disagreed with the blacklisting and urged the Department of War to offer Anthropic the same terms OpenAI received. That request matters because it signals OpenAI wants a competitive field rather than a government-chosen winner. Still, the situation also shows the government has leverage over firms whose products are now treated as strategic infrastructure. For voters who prioritize accountability, that leverage can be either reassuring or alarming depending on how it’s used.

What Employees Are Saying—and Why Their Criticism Matters

Backlash didn’t come only from the outside. Fortune reported that OpenAI alignment researcher Leo Gao criticized the announced guardrails as “window dressing,” arguing they may not meaningfully limit dangerous uses. Other reporting described broader employee unease after the company pivoted quickly from messaging sympathetic to Anthropic’s refusal to a classified deal of its own. One caveat worth noting: the most detailed employee critique cited in this reporting traces largely to Gao’s public comments.

OpenAI also highlighted internal voices defending the structure of the agreement. Katrina Mulligan, OpenAI’s head of national security partnerships, argued the contract binds the work to legal requirements and blocks surveillance use cases. A legal analyst cited in Fortune raised a practical concern: contracts tied to “current law” can be tested only when the full language, oversight mechanisms, and real-world disputes become visible. With the contract classified, the public is asked to trust a framework it cannot fully inspect.

Constitutional Stakes: Oversight, Transparency, and Mission Creep

Conservatives don’t have to oppose national defense innovation to demand constitutional guardrails. The central issue is oversight: classified adoption of powerful AI can expand the government’s analytical reach without the transparency Americans expect in domestic governance. OpenAI insists its red lines prohibit surveillance, but debates around what counts as “surveillance,” what data is ingested, and how tools are used across agencies are where mission creep historically starts. Limited disclosure makes those debates harder.

At the same time, the Trump administration’s hard posture toward Anthropic reflects a different kind of power question: whether the federal government should pressure companies into compliance by threatening exclusion. Ross Douthat raised concerns about the precedent and the independence of AI firms in this kind of environment. If Washington can effectively punish ethical refusals, future “red lines” may become performative. The public interest is served when rules are clear, competitors are treated consistently, and Congress can scrutinize guardrails.

Sources:

OpenAI CEO Sam Altman Defends Decision to Strike Pentagon Deal Amid Backlash Against the ChatGPT Maker Following Anthropic Blacklisting

OpenAI CEO Sam Altman answers questions on new Pentagon deal

Our agreement with the Department of War