Anthropic unveils advanced Claude model with restricted access

The race to build more powerful artificial intelligence is no longer defined by performance alone; it is increasingly shaped by how these systems are controlled and deployed. As advanced models begin to interact with critical infrastructure, cybersecurity, and financial systems, concerns around safety and misuse are becoming central to development decisions.

This reality is reflected in a new development from Anthropic, which has introduced Claude Mythos Preview, its most advanced model so far, while choosing not to release it for public use due to security risks tied to its capabilities.

To manage these risks, Anthropic introduced Project Glasswing, a controlled deployment initiative focused on identifying and fixing critical software vulnerabilities before they can be exploited. Rather than releasing the system openly, the company is directing its use toward strengthening global digital infrastructure.

Under this programme, a group of major technology and infrastructure companies is working alongside Anthropic to test and apply the model in secure environments. Participants include firms such as Amazon Web Services, Apple, Cisco, Google, Microsoft, and Nvidia, along with financial institutions like JPMorgan Chase. The programme also includes more than 40 additional organisations responsible for maintaining critical software systems. To support the effort, Anthropic has allocated up to $100 million in usage credits and committed additional funding to strengthen open-source security projects.

When Capability Becomes a Security Risk

Claude Mythos Preview has demonstrated the ability to identify vulnerabilities across major operating systems and web infrastructure, including flaws that had gone undetected for decades. In one case, it uncovered a 27-year-old issue in OpenBSD. In another, it autonomously identified and exploited a long-standing vulnerability in FreeBSD, achieving full system control without human intervention beyond the initial prompt.

What makes this more significant is how these abilities developed. According to Anthropic, the model was not trained specifically for cybersecurity. Its performance comes from improvements in reasoning, coding, and autonomy. The same strengths that help it find and fix problems also allow it to exploit them, which increases the risk.

This dual use is at the centre of Anthropic’s decision. Researchers at the company say the model can link several vulnerabilities together to create more advanced attack paths. Tasks that would normally require skilled security experts can now be done faster and at a larger scale. That raises serious concerns about misuse if the system is made widely available.

The risks are not just theoretical. As AI systems improve, there is growing evidence that they can be used in real cyber operations. This has led to closer attention from governments and security agencies, especially around how such tools could affect both cyber defence and attacks.

Project Glasswing is designed to reduce these risks. By limiting access to trusted organisations, vulnerabilities found by the model can be fixed before they are exposed more widely. This is especially important for open-source software, which supports much of the internet but often lacks enough security resources. Through funding and access, groups linked to organisations like the Linux Foundation are working to strengthen these systems.

Anthropic has said it may expand access in the future, but only after putting stronger safeguards in place. The company plans to test these protections using less risky models before considering wider use of Mythos Preview.

This approach is not unique. Companies such as OpenAI are also treating advanced models as high-risk in certain areas like cybersecurity. Across the industry, there is a clear pattern: the most powerful systems are being released carefully, not openly.

Claude Mythos Preview shows where the industry is heading. The challenge is no longer just building better AI, but making sure it is used in a safe and controlled way.