A leading United States intelligence agency is reportedly using a powerful artificial intelligence system developed by Anthropic, despite the company being placed on a federal blacklist over national security concerns. The development, first reported by Axios, has sparked debate within policy and technology circles about the growing dependence on advanced AI tools—even when their use conflicts with official restrictions.
At the center of the controversy is “Mythos,” an advanced AI model designed to identify software vulnerabilities, analyze large datasets, and assist in complex cybersecurity operations. According to the report, the National Security Agency (NSA) has continued to deploy the system in certain operational contexts, despite the U.S. Department of Defense designating Anthropic as a potential supply chain risk earlier this year.

The blacklist decision reportedly followed disagreements between Anthropic and defense officials over how its AI systems could be used. The company is said to have resisted requests to loosen internal safeguards that limit the deployment of its models in areas such as mass surveillance and autonomous military applications. In response, the Pentagon restricted its engagement with the firm, effectively barring it from new contracts and certain forms of collaboration.
Yet, the reported use of Mythos by the NSA suggests that operational needs may be overriding policy constraints. Intelligence officials are believed to value the system’s capabilities in detecting previously unknown vulnerabilities in widely used software platforms. In an era marked by escalating cyber threats from both state and non-state actors, such tools are increasingly seen as indispensable.
Cybersecurity experts note that systems like Mythos can rapidly scan millions of lines of code, identifying weaknesses that would take human analysts weeks or months to uncover. This ability has significant implications for national defense, particularly in safeguarding critical infrastructure such as energy grids, communication networks, and financial systems. At the same time, the technology’s potential for offensive use—such as identifying exploitable weaknesses in adversary systems—adds another layer of complexity to its deployment.
The apparent contradiction between the blacklist and continued usage reflects a broader tension within the U.S. government. On one hand, there is a push to impose stricter oversight on emerging AI technologies, especially those developed by private companies operating outside direct government control. On the other, there is a recognition that falling behind in AI capabilities could pose a strategic risk, particularly as rival nations accelerate their own development efforts.
Some analysts argue that the situation highlights the limitations of traditional procurement and regulatory frameworks when applied to rapidly evolving technologies. Unlike conventional defense contractors, AI companies often maintain strong ethical guidelines and technical restrictions on how their systems can be used. This can lead to friction when government agencies seek broader or more flexible applications.
The reported use of Mythos also raises legal and ethical questions. If a company is formally restricted due to perceived risks, critics ask, should its technology still be used in sensitive government operations? Others counter that such decisions must be evaluated in the context of national security imperatives, where the cost of not using the most advanced tools could be significantly higher.
Within Washington, the issue is likely to intensify ongoing discussions about how to balance innovation, security, and oversight. Lawmakers have already begun examining the role of private AI firms in defense and intelligence work, with some calling for clearer guidelines on procurement and usage. There are also concerns about transparency, particularly when agencies adopt technologies in ways that may not align with publicly stated policies.
Anthropic, for its part, has positioned itself as a company committed to building “aligned” AI systems with built-in safeguards. Its reluctance to modify these safeguards for certain government applications has been seen by some as principled, while others view it as a barrier to effective collaboration in high-stakes environments.

The reported actions of the NSA suggest that, regardless of official policy, the demand for cutting-edge AI capabilities remains strong. As cyber threats grow more sophisticated, intelligence agencies are under increasing pressure to adopt tools that can keep pace with adversaries. In such a landscape, the line between acceptable risk and operational necessity becomes increasingly blurred.
Looking ahead, the episode may serve as a case study in the challenges of governing transformative technologies. It underscores the need for more adaptive regulatory approaches that can accommodate both the risks and the strategic importance of AI. It also raises fundamental questions about who ultimately controls the use of powerful technologies—the companies that build them, or the governments that seek to deploy them.
For now, the reported use of Mythos despite the blacklist illustrates a reality that is becoming harder to ignore: in the race for technological advantage, policy frameworks are often struggling to keep up with practice.