Employees trying to use a corporate device to access the very popular artificial intelligence application of Chinese startup Deepseek could inadvertently expose their organization to threats such as cyberespionage, experts warned.
A major red flag, they say, The conditions of use of Deepseekwhich stipulates that user data is stored on servers in China and governed under Chinese law, which obliges cooperation with the country’s intelligence agencies.
The Chinese government has long been Accused of engaging in spy campaigns Advance objectives such as the theft of the intellectual property of Western organizations or the collection of geopolitical intelligence. He has always denied allegations.
“China is very good in the mining of data,” said Andrew Grealy, head of Armis Labs, a division of the Cybersecurity Company based in San Francisco, California, Armis Inc., in an interview. “Everything in the teraoctes is not a problem for them.”
In depth Attracted global attention after having published an open source AI model which, according to him, was built at low cost compared to American competitors like Chatgpt.
The news shook the world of technology last week, which aroused questions about America’s ability to maintain a position of domination of AI on the world scene. President Donald Trump described the development of “awakening”.
In the middle of the frenzy, Microsoft announced that it was Deepseek Last AI model available on “A Confidence, evolving and lending company platform. “Other technology giants, including Amazon web services,, have made similar movements.
In the meantime, The White House press secretary Karoline Leavitt said last week that US officials were examining the national security implications for Deepseek application, a chatbot ai. Italy and Taiwan have prohibited it.
In addition, the application was blocked by hundreds of armis customers last week because the tool was largely publicized and quickly gained popularity, according to Grealy.
A similar scheme was observed last week by Netskope, a cybersecurity company whose headquarters are Santa Clara, California, Ray Canzana, director of the Netkope threat laboratories division, told CFO Dive.
“We have seen almost half of our customers in the world trying Deepseek, and the other half prevented their users from trying it,” he said.
Canzanese said that some Netskope customers automatically block unprecedented applications.
“The risk is that your employees will trigger the application and start putting sensitive data-customer data, source code, regulated data, intellectual property,” he said. “This is the risk of depth, it is really the risk with all these generator AI applications.”
In addition to questions related to the Chinese government, Deepseek aroused other concerns.
The protections of the application against data leaks as well as hallucinations – where an AI model affirms inaccurate declarations – are “especially weak”, according to Ophir Dror, co -founder of the Cybersecurity Company Lasso security.
“We too Wider safety risks observed Beyond his chain of origin and supply, including suspicious behavior that could constitute a threat to organizational organizations and agencies, “he said in an email. “Given these results, we strongly advise you to use these models in critical workflows or share sensitive information with them.”
The New York -based cybersecurity company, Wiz said last week that it had discovered that Deepseek had accidentally left over a million data lines Available unsecured. The database contained a “large volume of chat history, backend data and sensitive information,” said Gal Nagli, Wiz security researcher, in a blog article at the time.
Another survey on security, conducted by Cisco, revealed that the Deepseek IA model has shown a 100% attack success rateNot having blocked a single harmful prompt.
“”It is very tempting to move to the use of Deepseek, but tHere are a lot of risks involved, ” Melissa Ruzzi, director of AI of the security company Appomni, said in an interview.
A depth spokesperson could not be joined immediately to comment.
Publisher’s note: This story has been updated with the comments of Lasso Security.