On April 15, Politico reported that U.S. federal agencies and government officials are quietly circumventing President Trump's ban on collaboration with AI startup Anthropic, as attention and concern surrounding Anthropic's powerful new AI model, Claude Mythos, continue to escalate.

Last week, Anthropic released a highly sophisticated new model, Claude Mythos, which has both impressed and worried researchers because it can detect critical software vulnerabilities that even the most brilliant human experts cannot identify.
A former senior U.S. technology official with direct knowledge of the discussions revealed that in recent days, staff from at least two major federal agencies have contacted Anthropic expressing interest in integrating Claude Mythos into their cyber defense efforts.
Furthermore, the Center for AI Standards and Innovation (CAISI), the division of the U.S. Department of Commerce responsible for assessing the risks and opportunities of AI models both domestically and internationally, is actively testing Mythos's hacking capabilities, according to four sources familiar with the matter: a current cybersecurity official, a former cybersecurity official, a former Trump administration official, and a former senior national security official.
According to three congressional aides working on AI policy, staff from at least three congressional committees held or requested briefings with Anthropic last week to gain a better understanding of Mythos’s cyber-scanning capabilities.
Even as the federal government remains embroiled in legal battles with Anthropic, its agencies are seeking access to Mythos, and CAISI is already using the model. This underscores how government officials hoping to leverage the new model to enhance U.S. cyber capabilities are carefully working around the Trump administration's attempts to block Anthropic.
In late February, after Anthropic CEO Dario Amodei explicitly refused to allow the Pentagon to use the company's models for autonomous lethal attacks or mass surveillance of the American public, Trump and Defense Secretary Hegseth instructed all federal agencies to cease using Anthropic's technology. Last month, Hegseth formally designated Anthropic a supply-chain risk company, an unprecedented move against a U.S. company that effectively bans its AI models from Department of Defense contract work.
“Ironically, the U.S. government attempted to ban its own agencies from using Anthropic’s products. Yet, just weeks later, Anthropic launched this revolutionary product, crucial for cybersecurity and therefore of paramount national security importance,” said Charlie Bullock, a lawyer and senior fellow at the Institute for Law and AI, a think tank.
As of press time, the U.S. Department of Defense declined to comment.