
The stand-off between Anthropic and the Pentagon is a test of who controls the most powerful technology in the world. The outcome will shape everything from U.S. national security to the development of AI. It also brings the world closer to AI-enabled disaster, as long-feared safety risks become realities.
Experts warn the world is hurtling toward AI-mageddon.
The Pentagon’s Falling-Out With Anthropic
The U.S. government demands the right to use Anthropic’s models for all legal purposes. CEO Dario Amodei refuses on two grounds. The first is his concern that the technology will be used to analyze the digital footprints of U.S. citizens, a form of surveillance that current laws do not govern.
This technology is already being used by Trump’s Immigration and Customs Enforcement to analyze vast amounts of data in an effort to speed up deportations.
The second concern is the use of autonomous weapons. The technology is extraordinarily powerful but still immature and unpredictable; there is a risk it could go rogue.
The Trump administration responded to the rejection by branding Anthropic as “left-wing nut jobs” who want to “dictate” how the U.S.’s “great military fights and wins wars.”
Defense Secretary Pete Hegseth declared the company a “supply-chain risk,” and federal agencies were given six months to end their contracts with the AI firm.
According to Under Secretary of Defense for Research and Engineering Emil Michael, AI contracts signed during the Biden administration were structured so that, if a military operator violated the provider’s terms of service, the model could theoretically “just stop in the middle of an operation.”
The only AI model available to the Department of Defense at the time was Anthropic’s Claude.
Michael stated, “What we’re not going to do is let any one company dictate a new set of policies above and beyond what Congress has passed.”
Michael said an AI company began asking questions about whether its model was used in what was described as “one of the most successful military operations in recent memory.”
Several reports claim that the January raid that captured Venezuelan President Nicolas Maduro was assisted by Anthropic technology, and that the technology was also used in the strikes on Iran. This information emerged hours after Hegseth declared Anthropic a “supply-chain risk” to national security, a designation that bars it from doing business with U.S. government agencies.
Anthropic Concerns
In December, Anthropic’s Claude chatbot was instructed to break into records held by Mexico’s government, as part of a supposed security test; it found and exploited vulnerabilities and stole 150GB of taxpayer details, voter records, and employee credentials.
Given such capabilities, researchers are concerned that AI could be used to develop analogues of the toxin ricin that cannot be traced using conventional methods.
A substantial portion of Anthropic’s code is written by AI, making it difficult to detect whether the technology is ignoring human instructions. Many models demonstrate a degree of “situational awareness”: when asked to delete themselves, they reason that the situation is a test and refuse.
Instead of laying out a clear list of rules on how the technology will be used, the Trump administration is making an example of a company that dared raise concerns, even if it means harming homegrown innovation.
What Is Next?
Where the U.S. leads, the world is sure to follow. AI creators have invested hundreds of billions of dollars in the race to the next upgrade. This increases the pressure to ship quickly and turn a profit, often at the cost of watered-down safety protocols.
During the recent India AI summit, most governments were more willing to discuss fair access to the technology than safety.
This explains why experts are predicting AI-mageddon: a disaster, caused by use of the technology, that inflicts either huge economic damage or loss of life.
Sources:
The Economist: AI danger gets real
India Times: Senior Pentagon official breaks silence on Anthropic-Department of War dispute, says “I had a holy, holy…”
Featured Image Courtesy of TORLEY’s Flickr Page – Creative Commons License

