Anthropic co-founder confirms company briefed Trump administration on dangerous Mythos AI model
In a revealing interview at the Semafor World Economy Summit, Anthropic co-founder Jack Clark confirmed that the AI company had briefed the Trump administration about its new Mythos model. The model, announced just last week, is considered so dangerous that it will not be released to the public, primarily due to its powerful cybersecurity capabilities.
Why Anthropic engaged with the government despite ongoing legal disputes
This confirmation comes at a time when Anthropic is simultaneously suing the U.S. government. In March, the company filed a lawsuit against Trump’s Department of Defense after the agency labeled Anthropic a supply-chain risk. The dispute stemmed from the Pentagon’s desire for unrestricted access to Anthropic’s AI systems for uses including mass surveillance and fully autonomous weapons—a deal that ultimately went to OpenAI instead.
However, Clark downplayed the significance of this conflict during his interview. He described the supply-chain risk designation as a “narrow contracting dispute” and emphasized that it should not overshadow the company’s commitment to national security. “Our position is the government has to know about this stuff,” Clark stated. “We have to find new ways for the government to partner with a private sector that is making things that are truly revolutionizing the economy.”
Mythos AI model: A cybersecurity powerhouse deemed too risky for public release
The Mythos model represents a significant leap in AI capabilities, particularly in the realm of cybersecurity. Its potential for both defensive and offensive applications made it a subject of intense interest for government agencies. Reports indicate that Trump officials were encouraging major banks—including JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley—to test the model.
Clark confirmed the briefings directly: “So absolutely, we talked to them about Mythos, and we’ll talk to them about the next models as well.” This transparency, he argued, is essential for balancing innovation with national security concerns.
What makes Mythos different from other AI models
Unlike many AI systems that focus on general-purpose tasks, Mythos was specifically designed for cybersecurity applications. Its capabilities are so advanced that Anthropic decided against a public release, fearing misuse by malicious actors. This decision aligns with the company’s broader philosophy of responsible AI development, even if it means forgoing commercial opportunities.
AI’s impact on employment: Clark offers a nuanced view
Beyond the Mythos model, Clark addressed broader questions about AI’s societal impact, particularly on employment. While Anthropic CEO Dario Amodei has warned that AI could push unemployment to Depression-era levels, Clark offered a slightly different perspective. He explained that Amodei’s estimates rest on the belief that AI will, very quickly, become far more powerful than most people expect.
Clark, who leads a team of economists at Anthropic, noted that the company is currently seeing “some potential weakness in early graduate employment” across select industries. However, he emphasized that Anthropic is prepared for major employment shifts should they occur.
Advice for college students in the age of AI
When asked what majors students should pursue or avoid in light of AI’s impact, Clark offered broad but insightful advice. He suggested that the most valuable fields are those that “involve synthesis across a whole variety of subjects and analytical thinking about that.”
“That’s because what AI allows us to do is it allows you to have access to sort of an arbitrary amount of subject matter experts in different domains,” Clark explained. “But the really important thing is knowing the right questions to ask and having intuitions about what would be interesting if you collided different insights from many different disciplines.”
This advice underscores a key theme: as AI becomes more capable, human skills like critical thinking, interdisciplinary synthesis, and curiosity become even more valuable.
The balancing act: National security, corporate interests, and public safety
The Anthropic case highlights the delicate balance that AI companies must strike. On one hand, they have a responsibility to ensure their technologies are not misused. On the other, they must engage with governments to address national security concerns. This tension is likely to intensify as AI capabilities continue to advance.
Clark’s confirmation that Anthropic briefed the Trump administration on Mythos—despite ongoing litigation—suggests that the company prioritizes national security over corporate disputes. Whether this approach will serve as a model for other AI companies remains to be seen.
As the AI landscape evolves, one thing is clear: the conversation between Silicon Valley and Washington is only just beginning. The Mythos model may be too dangerous for public release, but its existence is already shaping the future of AI governance.