Sam Altman, CEO of OpenAI, announced late Friday that his company has reached an agreement that will allow the Department of Defense to use its AI models in the department’s classified network.
The announcement comes on the heels of a high-profile standoff between the Department of Defense — rebranded under the Trump administration as the War Department — and OpenAI rival Anthropic. The Pentagon has pushed AI companies, including Anthropic, to allow their models to be used “for all lawful purposes,” while Anthropic has sought to draw a red line around mass domestic surveillance and fully autonomous weapons.
In a lengthy statement issued on Thursday, Anthropic CEO Dario Amodei said the company “has never raised any objections to specific military operations or attempted to limit the use of our technology in this way,” but added that “in a narrow set of cases, we think AI is capable of undermining democratic values, rather than defending them.”
More than 60 OpenAI employees and 300 Google employees signed an open letter this week asking their employers to support Anthropic’s position.
After Anthropic and the Pentagon failed to reach an agreement, President Donald Trump criticized “leftist jobs at Anthropic” in a social media post that also directed federal agencies to stop using the company’s products after a six-month phase-out period.
Separately, Defense Secretary Pete Hegseth claimed that Anthropic was trying to “seize veto power over operational decisions of the United States military.” Hegseth also said he was designating Anthropic a supply chain risk: “As of now, no contractor, supplier, or partner that does business with the U.S. military may conduct any business activity with Anthropic.”
On Friday, Anthropic said it “has not yet received direct communication from the War Department or the White House regarding the status of our negotiations,” but insisted it “will challenge any supply chain risk designation in court.”
Notably, Altman claimed in a post on X that OpenAI’s new defense contract includes safeguards addressing the same issues that became a flashpoint for Anthropic.
“Two of our most important safety principles are the prohibition of domestic mass surveillance and human responsibility for the use of force, including autonomous weapons systems,” Altman said. “The War Department agrees with these principles, which are reflected in law and policy, and they are embodied in our agreement.”
OpenAI will “build in technical safeguards to ensure that our models behave as they should, which is also what the War Department wants,” Altman said, and will embed engineers with the Pentagon “to help with our models and to ensure their integrity.”
“We are asking the War Department to offer these same terms to all AI companies, which we believe everyone should be willing to accept,” Altman added. “We have expressed our strong desire to see matters settled beyond legal and governmental action and reach reasonable agreements.”
Fortune’s Sharon Goldman reports that Altman told OpenAI employees in an all-hands meeting that the government would allow the company to build its own “security stack” to prevent abuse, and that “if a model refuses to do a task, the government will not force OpenAI to make it do that task.”
Altman’s post came shortly before news emerged that the US and Israeli governments had begun bombing Iran, accompanied by Trump’s call to overthrow the Iranian government.