
Is Anthropic limiting the release of Mythos to protect the internet — or Anthropic?


Anthropic said this week that it has restricted the release of its latest model, called Mythos, because it is too capable of discovering vulnerabilities in software that users around the world rely on.

Instead of unleashing Mythos on the public, the frontier lab will share it with a range of large companies and organizations managing critical online infrastructure, from Amazon Web Services to JPMorgan Chase. OpenAI is reportedly considering a similar plan for its next cybersecurity tool. The ostensible idea is to let these large organizations outmaneuver bad actors who could leverage advanced LLMs to hack into secure software.

But the word "ostensible" in the sentence above is an indication that there may be more to this release strategy than cybersecurity, or that the model's capabilities are being exaggerated.

Dan Lahav, CEO of the AI cybersecurity lab Irregular, told TechCrunch in March, before the release of Mythos, that while discovering vulnerabilities with AI tools is important, the specific value of any weakness to an attacker depends on many factors, including how vulnerabilities are used together.

“The question I always have is: Did they find something that could be exploited in a very meaningful way, either individually, or as part of a series?” Lahav said.

Anthropic says that Mythos is able to exploit vulnerabilities to a greater extent than its previous model, Opus. But it's not clear whether Mythos is actually the best model for cybersecurity. Aisle, an AI cybersecurity startup, said it was able to replicate much of what Anthropic says Mythos accomplished using smaller, open-weight models. The Aisle team believes these results show there is no single best model for cybersecurity; which model wins instead depends on the task at hand.


Since Opus was already seen as a game-changer in cybersecurity, there's another plausible reason frontier labs want to limit their releases to large enterprises: it creates a flywheel for large enterprise contracts while making it difficult for competitors to copy their models using distillation, a technique that leverages frontier models to train new LLMs on the cheap.

"This is a marketing cover for the fact that cutting-edge models are now restricted to enterprise agreements and no longer available for small labs to distill," David Crawshaw, software engineer and CEO of the startup exe.dev, suggested in a post on social media. "By the time you and I are able to use Mythos, there will be a new, higher-quality version just for enterprises. This treadmill helps keep enterprise dollars flowing (which is most dollars) by keeping distillers in second place," Crawshaw said.

This analysis is consistent with what we see in the AI ecosystem: a race between frontier labs that develop the largest, most capable models, and companies like Aisle that rely on multiple models and see open-source LLMs, often from China and often claimed to have been developed through distillation, as a path to economic advantage.

Frontier labs have taken a tougher stance on distillation this year, with Anthropic publicly exposing what it says are attempts by Chinese companies to copy its models. Three leading labs, Anthropic, Google, and OpenAI, are cooperating to identify and ban distillers, according to a Bloomberg report.

Distillation poses a threat to the frontier labs' business model because it erodes the advantage conferred by deploying massive amounts of capital to scale training. Preventing distillation is therefore a worthwhile endeavor in its own right, but the selective release approach also gives labs a way to differentiate their enterprise offerings as distribution becomes key to profitability.


It remains to be seen whether Mythos or any new model truly threatens cybersecurity, and careful deployment of the technology is a responsible way forward.

Anthropic did not respond to our questions about whether the decision was also related to distillation concerns by the time of publication, but the company may have found a clever way to protect both the internet and its bottom line.

