OpenAI said on Feb 28 that the agreement it struck a day earlier with the Pentagon to deploy its technology on the US defence department’s classified network includes additional safeguards governing how that technology can be used.
US President Donald Trump on Feb 27 directed the government to stop working with Anthropic.
Anthropic said it would challenge any risk designation in court.
Soon after, rival OpenAI, which is backed by Microsoft, Amazon, SoftBank and others, announced its own deal.
“We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s,” OpenAI said on Feb 28.
The AI firm said that the contract with the Department of Defence, which the Trump administration has renamed the Department of War, enforces three red lines: OpenAI technology cannot be used for mass domestic surveillance, to direct autonomous weapons systems or for any high-stakes automated decisions.
“In our agreement, we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections,” OpenAI said.
The Pentagon signed agreements worth up to US$200 million (S$253 million) each with major AI labs in the past year, including Anthropic, OpenAI and Google.
The Pentagon is seeking to preserve full flexibility in defence and to avoid being constrained by warnings from the technology’s creators against powering weapons with unreliable AI.
OpenAI cautioned that any breach of its contract by the US government could trigger a termination, though it added, “We don’t expect that to happen.”
The company also said rival Anthropic should not be labelled a “supply chain risk,” noting, “We have made our position on this clear to the government.” REUTERS