The artificial intelligence model Claude, developed by Anthropic, played an active role in the military operation that captured Venezuelan dictator Nicolás Maduro last month, according to multiple reports citing people familiar with the matter. The revelation marks the first confirmed use of a commercial AI model in a classified Pentagon combat operation and has ignited a behind-the-scenes battle over how far Silicon Valley’s AI safety guardrails should extend into America’s most sensitive military missions.
U.S. special operations forces raided Caracas in January, bombing several sites and extracting Maduro and his wife to face narcotics charges in New York. The operation resulted in no American casualties but left dozens of Venezuelan and Cuban security personnel dead. Claude was deployed during the active mission itself, not merely in preparatory planning phases, processing real-time intelligence data as American forces executed the raid.
The AI system reached the battlefield through Anthropic’s partnership with Palantir Technologies, the data analytics firm whose platforms have become deeply embedded in Defense Department and federal law enforcement operations. Claude became the first AI model from a major commercial developer cleared for use on the Pentagon’s classified networks, where the military conducts its most sensitive work, from weapons testing to live operational communications.
That pioneering status now threatens to become a liability for Anthropic. Following the disclosure of Claude’s role in the Maduro operation, a senior Trump administration official told Axios the Pentagon is reevaluating its partnership with the company. The official’s account suggested Anthropic had called the Department of War to ask whether Claude was used in the raid, a move that “caused real concerns across the Department of War indicating that they might not approve if it was.”
Anthropic denied making such a call. The company’s usage policies explicitly prohibit Claude from being deployed “to facilitate violence, develop weapons or conduct surveillance.” Those restrictions reflect Anthropic’s public positioning as the safety-conscious alternative in the AI industry. CEO Dario Amodei has repeatedly warned of existential dangers posed by unconstrained artificial intelligence. Just this week, the head of Anthropic’s Safeguards Research Team resigned, warning that “the world is in peril.” Days later, the company committed $20 million to political advocacy backing robust AI regulation.
But the company is simultaneously negotiating with the Pentagon over whether to loosen those very restrictions. The discussions reportedly center on whether Claude can be used for autonomous weapons targeting and domestic surveillance. The standoff has stalled a contract worth up to $200 million that was awarded last summer. War Secretary Pete Hegseth has made his position clear, vowing not to use AI models that “won’t allow you to fight wars.”
“The future of American warfare is here, and it’s spelled AI,” Hegseth said in December. “As technologies advance, so do our adversaries. But here at the War Department, we are not sitting idly by.”
An Anthropic spokesperson told Fox News Digital the company “cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise.” The spokesperson added that “any use of Claude — whether in the private sector or across government — is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance.”
A source familiar with the matter told Fox News that Anthropic has visibility into both classified and unclassified usage and is confident that every deployment falls within the company’s policies and its partners’ compliance frameworks. How the company squares that assurance with a usage policy that forbids facilitating violence remains unclear, particularly given that the operation left dozens dead.
The Department of War declined to comment when reached by Fox News.
The precise role Claude played during the raid has not been disclosed. The military has previously used AI models to analyze satellite imagery and process intelligence in real time. Such capabilities are prized by commanders operating in chaotic environments, where rapid data synthesis can mean the difference between mission success and failure. AI tools can perform tasks ranging from document summarization to controlling autonomous drones.
The controversy arrives at a moment when multiple AI companies are navigating the tension between commercial relationships with the Pentagon and self-imposed ethical boundaries. OpenAI, Google, and Elon Musk’s xAI have all secured deals granting Pentagon access to their models, often with fewer restrictions than those applied to civilian users. Discussions are ongoing between those companies and the Pentagon about deploying their tools on classified systems.
Only Anthropic’s Claude currently operates in that classified space, making it uniquely positioned but also uniquely exposed. The company raised $30 billion in its latest funding round and is now valued at $380 billion. Whether it can maintain that valuation while navigating incompatible demands from safety advocates and the defense establishment may determine whether other AI firms follow its path into classified military applications or seek alternative arrangements.
The Trump administration has made AI development a strategic priority. The successful Maduro operation demonstrated both the technology’s potential battlefield value and the complications that arise when companies marketing themselves on safety principles discover their products being used in lethal military operations. President Trump recounted the raid’s success during an event at Fort Bragg, North Carolina, describing how American forces “blasted through steel doors like it was papier-mâché” to capture the Venezuelan leader.
Whether Anthropic will continue providing the AI tools that make such operations possible, or whether the Pentagon will find providers less concerned with ethical constraints, remains an open question. What’s no longer in question is that commercial artificial intelligence has crossed the threshold from supporting functions into active combat operations. The age of AI warfare isn’t approaching. It has arrived.