The Scoop
OpenAI’s announcement Monday that the US military will get access to ChatGPT came after months of deliberation over whether employees would accept the deployment, according to people briefed on the matter.
The chatbot will be offered through Genai.mil, a new program the Pentagon launched last month. The tricky part for OpenAI was that the Pentagon asked to use its technology for “all lawful uses,” meaning the company couldn’t impose any restrictions on what it or its employees consider acceptable uses, whether for moral or technical reasons.
The “all lawful uses” clause has become a sticking point in negotiations between the Pentagon and Anthropic, which wants more control over how its technology is used. Anthropic leaders are concerned that the military might use the models in situations where the technology is unreliable or could endanger lives.
The Pentagon rejected Anthropic’s requests for more control, according to people briefed on the matter, and the company’s Claude chatbot is still not available via Genai.mil. Earlier, Google and xAI agreed to the “all lawful uses” clause and even removed some model-level restrictions.
OpenAI agreed to the contract, but is offering the same ChatGPT that non-military users can access. That means the standard guardrails on the model remain, and by default it could refuse some prompts. ChatGPT, unlike Claude, is also not cleared for top secret work, which could create a de facto barrier to many military use cases.
OpenAI, Anthropic, Google, xAI, the Pentagon and the White House didn’t immediately respond to requests for comment.
Step Back
In working with the US military, tech companies are forced into a delicate dance. Some employees fear that the AI models they build could be misused in combat, or used for purposes they morally oppose. Google’s decision to quickly agree to the military’s terms can now be used by competitors to recruit employees who may oppose that kind of use.
But Anthropic’s moral stand, while popular with its employees, has drawn the ire of the Pentagon and the White House, Semafor has reported.
Still, some employees at OpenAI said they felt it was important that the company make its technology available to the military to avoid giving xAI’s Grok an advantage, according to one person familiar with the deliberations.
Reed’s view
Some technologists fear that putting AI in charge of weapons is a stepping stone to some kind of existential event for humanity. Ironically, the best argument against that fear came from Nate Soares, coauthor of the AI doomer book If Anyone Builds It, Everyone Dies, who told me on a panel last year that a superintelligent AI wouldn’t need conventional weapons to take out humanity. And the use of AI on the battlefield could actually save lives by removing humans from the equation.
Another concern is that the technology is powerful, but also flawed and unpredictable. It’s possible, for instance, that a military operation could rely on it too heavily and end up striking the wrong targets.
But AI is not a weapon itself. It’s a tool. It might not be immediately apparent how AI factors into any given military operation, and that ambiguity adds to the fear factor. There are already stories swirling around Washington about which AI models might have been used during various military operations, or even in domestic surveillance.
Even if the Department of Defense allowed Anthropic and other AI companies to dictate exactly how their tools could be used during operations, the nature of the technology — the most general-purpose software ever built — means it would be difficult to ensure it was never misused, or used in scenarios deemed problematic.
There’s also little chance any AI company would take responsibility for military outcomes that involve its technology — it’s ultimately the humans in the military who will be held accountable. And so, it will probably be up to those with accountability to determine how the technology is used.