Dario Amodei, CEO of Anthropic, is worried about DeepSeek, the Chinese AI company that took Silicon Valley by storm with its R1 model. And his concerns could be more serious than the ones typically raised about DeepSeek sending user data back to China.
In an interview on Jordan Schneider's ChinaTalk podcast, Amodei said DeepSeek generated rare information about bioweapons in a safety test run by Anthropic.
DeepSeek's performance was "the worst of basically any model we'd ever tested," Amodei said. "It had absolutely no blocks whatsoever against generating this information."
Amodei said this was part of evaluations Anthropic routinely runs on various AI models to assess their potential national security risks. His team looks at whether models can generate bioweapons-related information that isn't easily found on Google or in textbooks. Anthropic positions itself as an AI model developer that takes safety seriously.
Amodei said he doesn't think DeepSeek's models today are "literally dangerous" in providing rare and harmful information, but that they might be in the near future. Although he praised DeepSeek's team as "talented engineers," he advised the company to "take seriously these AI safety considerations."
Amodei has also supported strong export controls on chips to China, citing concerns that they could give China's military an edge.
Amodei didn't clarify in the ChinaTalk interview which DeepSeek model Anthropic tested, nor did he give more technical details about these tests. Anthropic didn't immediately respond to a request for comment from TechCrunch. Neither did DeepSeek.
DeepSeek's rise has sparked concerns about its safety elsewhere, too. For example, Cisco security researchers said last week that DeepSeek R1 failed to block any harmful prompts in their safety tests, achieving a 100% jailbreak success rate.
Cisco didn't mention bioweapons, but said it was able to get DeepSeek to generate harmful information about cybercrime and other illegal activities. It's worth noting, though, that Llama-3.1-405B and OpenAI's GPT-4o also had high failure rates of 96% and 86%, respectively.
It remains to be seen whether safety concerns like these will make a serious dent in DeepSeek's rapid adoption. Companies like AWS and Microsoft have publicly touted integrating R1 into their cloud platforms, which is ironic given that Amazon is Anthropic's biggest investor.
On the other hand, there's a growing list of countries and companies, particularly government organizations such as the U.S. Navy and the Pentagon, that have started banning DeepSeek.
Time will tell whether these efforts gain traction or whether DeepSeek's global rise continues. Either way, Amodei says he does consider DeepSeek a new competitor on the level of the top U.S. AI companies.
"The new fact here is that there's a new competitor," he said on ChinaTalk. "In the big companies that can train AI, Anthropic, OpenAI, Google, perhaps Meta and xAI, now DeepSeek is maybe being added to that category."