
OpenAI CEO Sam Altman responds to backlash over the Department of War deal

OpenAI has entered into an agreement with the US Department of War (DoW), providing its AI tools for military use in “decentralized environments.” Announcing the partnership on Saturday, ChatGPT’s developer said the deal includes safeguards that prevent its technology from being used for mass surveillance or in autonomous weapons. However, the contract excerpts shared by OpenAI appear to leave significant loopholes.


News of OpenAI’s partnership with the DoW came just one day after President Donald Trump announced that the US government would no longer use technology from OpenAI rival Anthropic, including its AI model Claude. Announcing the split on Truth Social, Trump took issue with Anthropic’s insistence that the DoW adhere to the company’s terms of service.

Which usage policies Trump opposes was revealed in a statement from Anthropic CEO Dario Amodei on Thursday. In it, he said the DoW wanted Anthropic to remove safeguards against the use of its technology for mass surveillance of US citizens and for fully AI-controlled weapons. Amodei conceded such use could be legal, but said “this is because the law has not caught up with the rapidly growing power of AI.”

“[In] a small set of cases, we believe that AI can undermine, rather than protect, democratic values,” Amodei wrote. “Other uses are also outside the boundaries of what today’s technology can do safely and reliably.”

OpenAI’s safeguards are apparently more to the Trump administration’s liking, with the company stepping in to supply the US military with AI technology in Anthropic’s place. Even so, OpenAI says its agreement with the DoW not only contains similar safeguards prohibiting the use of its technology for domestic mass surveillance or the guidance of autonomous weapons, but adds a third: “No use of OpenAI technology in highly automated decisions.”

“We maintain full visibility into our security stack, deploy via the cloud, have dedicated OpenAI staff in place, and have strong contractual safeguards,” the OpenAI announcement read. “All of this adds up to the strongest protections available under US law.”

According to OpenAI, its restrictions are more enforceable than Anthropic’s because it will only provide the DoW with its technology via the cloud, rather than embedding it directly in hardware. OpenAI staff will also remain involved to see how the DoW uses its technology. This will allegedly give the company more oversight and control over its AI systems.

“We do not know why Anthropic was unable to reach this agreement, and we hope that they and many other labs will consider it,” OpenAI wrote.

However, the contract excerpts shared by OpenAI indicate that its technology is only restricted from use in autonomous weapons or in surveillance of US citizens where such use is already illegal. In fact, the agreement appears to lay out the conditions under which OpenAI technology will be permitted for these purposes, such as where human control over weapons is not required by DoW policy or law.

“The Department of the Army may use the AI System for all legitimate purposes, consistent with applicable law, operational requirements, and well-established safety and oversight principles,” the contract reads, per OpenAI. “[A]ny use of AI in autonomous and semi-autonomous systems must be rigorously verified, validated, and tested to ensure that it works as intended in real-world environments before deployment.”

Responding to concerns in a post on LinkedIn, OpenAI’s head of national security partnerships, Katrina Mulligan, emphasized that its usage policies are not the only safeguards in place, reiterating its cloud-only deployment and the involvement of its employees.

“[The DoW’s] position was: build the model the way you want, deny whatever requests you want, just don’t try to control our operational decisions with usage policies,” Mulligan wrote.

Still, doubts remain about the effectiveness of these safeguards, especially considering OpenAI’s reluctance to take an ethical stance of its own.

Sam Altman talks about the OpenAI deal with the Department of War

OpenAI CEO Sam Altman held a Q&A on X in an attempt to ease user concerns about the DoW deal, with little success. Acknowledging that the announcement was “really rushed, and the optics don’t look good,” Altman said he hoped the deal would ease tensions between the DoW and the AI industry.

“I think that a good relationship between the government and the companies developing this technology is important in the next few years,” Altman wrote.

The deal may have brought OpenAI and the US government closer together, but it seems to have simultaneously alienated regular ChatGPT users.

In response to a question about whether allowing all legal uses opens the door to mass surveillance, Altman shared a post by US Secretary of War Emil Michael, in which he said the “DoW does not monitor the domestic communications of US citizens (including commercial communications) and to do so would be illegal and very un-American.”

Unsurprisingly, few seem inclined to take the DoW’s word for it. In 2013, whistleblower Edward Snowden revealed mass surveillance of US citizens by the National Security Agency (NSA), part of the DoW (then called the Department of Defense). The program, which collected Americans’ phone records, was later found to be illegal. Human Rights Watch also accused the then-Defense Department of searching US citizens without warrants in 2017.

“The government has already broken the law and illegally surveilled [sic] US citizens,” replied X user @bolts6629. “And a milquetoast statement from an executive with a reputation for lying is supposed to reassure us?”

Altman said he would oppose the use of OpenAI technology for mass surveillance at home “because it violates the Constitution,” and expressed discomfort with the idea of an amendment that would allow such use. However, some social media users questioned the claim, noting that he has gone back on other promises in the past.

“Other things you said you wouldn’t do: sidestep the OpenAI board, remove the non-profit structure, put ads in ChatGPT,” commented @Laneless_.

In addition, OpenAI’s CEO revealed that the company is reluctant to draw moral lines of its own, preferring to abdicate responsibility and follow government direction rather than take any kind of stand itself.

“[W]e were not elected,” Altman wrote. “We have a democratic process where we choose our leaders. We know the technology and understand its limitations, but I think you should be more afraid of a private company deciding what is right and wrong in the most important areas.”

“Following orders is not an excuse for misconduct,” replied @MagisterLudiX. “Either you have firm red lines, or you treat them as negotiable depending on the political context.”

“AI is a tool. Put hard limits on it, like any other tool,” wrote @genericrohan. “It’s not about deciding what the military can do, it’s about limiting what the military can do with your tool.”

In response to news of the OpenAI and DoW collaboration, many ChatGPT users are reportedly canceling their subscriptions to the AI chatbot. Many have instead turned to Anthropic’s AI chatbot Claude, which has dethroned ChatGPT as the most downloaded free app in the US Apple App Store.

“OpenAI just made a deal with the devil and lost this 2 year customer,” Reddit user u/boomroom11 posted on the r/ChatGPT subreddit. The post had over 26,000 upvotes at the time of writing. “The (originally non-profit) company that told us it existed to build AI safely for the benefit of humanity is now taking contracts from the Pentagon. Sam Altman has decided that defense funding is more important than the mission the company was founded on.”


Disclosure: In April 2025, Ziff Davis, Mashable’s parent company, filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.


