
“Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.”
That’s not a quote from Anthropic CEO Dario Amodei refusing to accede to the US Department of War’s request that it allow its Claude AI models to be used for mass surveillance and, perhaps more problematically, “fully autonomous weapons.” Instead, it comes from a 2017 Open Letter to the UN, co-signed by dozens of AI and robotics leaders, including Elon Musk, asking the global organization to ban autonomous weapons.
It’s a window into long-brewing concerns over the abuse and misuse of autonomous systems for warfare. It’s also likely, despite Musk’s closeness to the current Trump administration, that US Secretary of Defense (or War) Pete Hegseth has never read it.
Anthropic is now at risk of losing a $200M US Department of War contract, despite, as Amodei describes it, already working “proactively to deploy our models to the Department of War and the intelligence community.”
Amodei is by no means anti-defense or against the use of AI by the US government. In his letter explaining Anthropic’s decision, Amodei writes, “I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.”
However, what Hegseth has asked of Anthropic is to countermand its own “Constitution,” a set of principles and safety restrictions governing the use and behavior of its AI models. The US Department of War basically wants Anthropic to remove the guardrails. Principles in Anthropic’s Constitution, such as being “Broadly Safe” and “Broadly Ethical,” are in direct conflict with Hegseth’s demands that the AI be used for mass surveillance and for fully autonomous weapons.
Amodei makes it clear that his systems are not ready for any of this.
“Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons,” writes Amodei, adding, “Without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day.”
Armed and dangerous
These are not new concepts. Many in the tech industry have been pondering these issues for almost a decade (if not longer). Musk and the AI and robotics community raised the alarm in 2017 because we were already seeing AI-backed robot systems being used in questionable ways.
In 2016, Dallas police used a bomb disposal robot to kill a mass shooting suspect in Texas. Dallas PD attached an explosive device to the robot’s arm, guided it to where the suspect was holed up, and then detonated the device, killing the suspect.
At the time, some saw it as an inflection point, and a concerning one at that. Episodes like that may or may not have triggered that 2017 letter to the UN.
Keep in mind that this happened before the current generative and agentic AI revolution.
Amodei knows better than most the massive leaps foundational models are taking every few months and, as he makes clear in his letter, our rules and strategies for managing AI in these circumstances have already fallen behind their capabilities.
“AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI,” he wrote.
Essentially, with AI, we don’t know what we don’t know. Hegseth’s willingness to recklessly deploy powerful AI models for both surveillance and warfare suggests he has no knowledge of or interest in this history, and even less understanding of the intricacies of these systems.
A very bad idea
I’ve yet to talk to a technologist, a roboticist, or someone within the AI community who thinks letting an AI (or an AI-powered robot) control or carry a weapon is a good idea.
Hegseth isn’t necessarily spelling out that scenario, but his requirement to remove the guardrails Anthropic has smartly put in place indicates to me that he doesn’t really care about repercussions and AI casualties. He’s focused on results, perhaps at any or all costs, including safety and liberty.
Amodei’s done the right thing here, basically calling Hegseth’s bluff. As the Anthropic CEO made clear, Claude AI is already embedded in many Department of War systems. Pulling it out and retrofitting those systems with another, perhaps less powerful and intelligent, set of models won’t be easy and probably won’t yield the desired outcome: a system ready to carry out Hegseth’s bidding.
Clearer heads must prevail here. As the tech leaders and, yes, even Elon Musk, wrote in 2017, “Once this Pandora’s box is opened, it will be hard to close.”