
Anthropic has changed its AI safety policy. Here’s why.

When Anthropic was founded, the company hoped to spark an industry-wide “race to the top” in artificial intelligence, rather than a race to the bottom in which the pursuit of customers and market dominance could lead to catastrophic safety risks.

So Anthropic adopted safety commitments and policies it hoped competitors would also adopt. In some cases, companies including Google and OpenAI have done so, according to Anthropic. Still, the race to the top hasn’t taken off the way the company had hoped, according to a blog post published Tuesday.

The post announced that Anthropic, the maker of the AI chatbot Claude, is changing key safety measures to meet what it sees as today’s challenges.


In particular, Anthropic will no longer automatically pause development of a model that could be considered dangerous; instead, it will weigh the actions of its competitors and whether they are releasing models with similar capabilities. Previously, Anthropic had committed to safeguards that reduce the overall risk of its models regardless of whether other AI developers did the same.

“The policy landscape has shifted toward prioritizing AI competition and economic growth, while safety-focused discussions have yet to gain meaningful traction at the government level,” the company wrote. “We remain convinced that effective government engagement on AI safety is necessary and achievable, and we intend to continue advancing the discussion based on evidence, national security interests, economic competitiveness, and public trust. But this is proving to be a long-term project, not something that happens naturally as AI becomes more powerful or crosses certain thresholds.”

Although Anthropic says it intends to keep leading on safety, its latest decision reflects the rapid pace at which competitors are releasing new models.

Anthropic has also come under intense pressure this week from the US Department of Defense, which is pushing the company to allow the military to use its AI tools for any purpose, including mass surveillance and the deployment of autonomous weapons without human oversight.

Anthropic has so far refused to budge on those points in contract negotiations with the Department of Defense, reportedly drawing the ire of Defense Secretary Pete Hegseth, who has threatened to sever the company’s relationship with the military, Axios reported.

Anthropic is participating in an AI pilot program to analyze military-related imagery, along with Google, OpenAI, and xAI, according to The New York Times. Although Claude is the only chatbot currently working on classified government programs, a Pentagon official said Anthropic could be replaced by another company.


Disclosure: Ziff Davis, Mashable’s parent company, filed a lawsuit against OpenAI in April 2025, alleging that it infringed Ziff Davis’s copyrights in training and operating its AI systems.
