A security lapse at Chinese artificial intelligence company DeepSeek exposed more than a million lines of sensitive internal data, including user chat histories, API secrets and backend operational details, according to research published Wednesday by cloud security firm Wiz.
The exposure, discovered earlier this month, stems from a publicly accessible ClickHouse database linked to DeepSeek's systems. The database, hosted on two DeepSeek subdomains, required no authentication, allowing unrestricted access to internal logs dating back to Jan. 6. DeepSeek, which sent shockwaves through the tech industry with its cost-efficient DeepSeek-R1 reasoning model, secured the database within hours of being notified by the researchers.
Wiz researchers identified the vulnerability during routine reconnaissance of DeepSeek's internet-facing assets. Two non-standard open ports (8123 and 9000) led to an exposed ClickHouse database, an open-source database management system optimized for fast analytical queries on large data sets. From there, Wiz researchers ran arbitrary SQL queries, which retrieved information including:
- Plaintext chat histories between users and DeepSeek's AI systems
- API keys and cryptographic secrets
- Server directory structures and operational metadata
- References to internal API endpoints
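The kind of access described above is possible because ClickHouse exposes an HTTP interface (by default on port 8123) that accepts SQL directly in a URL query parameter. The sketch below illustrates the general mechanism; the hostname is a placeholder, not a real DeepSeek endpoint, and no request is sent unless `probe()` is actually called.

```python
# Minimal sketch of querying an unauthenticated ClickHouse HTTP interface.
# The host below is a placeholder for illustration only.
from urllib.parse import urlencode
from urllib.request import urlopen


def build_query_url(host: str, sql: str, port: int = 8123) -> str:
    """ClickHouse's HTTP interface accepts SQL via the `query` URL parameter."""
    return f"http://{host}:{port}/?{urlencode({'query': sql})}"


def probe(host: str, sql: str = "SHOW TABLES") -> str:
    """On a server with no authentication, the query result comes back as plain text."""
    with urlopen(build_query_url(host, sql), timeout=5) as resp:
        return resp.read().decode()


# Build (but do not send) a request that would list tables on an exposed server.
print(build_query_url("clickhouse.example.internal", "SHOW TABLES"))
```

A server left open this way answers such requests with no credentials at all, which is why a simple `SHOW TABLES` followed by `SELECT` statements is enough to enumerate and read its contents.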
Researchers say attackers could theoretically run similar commands to extract files directly from DeepSeek's servers, potentially leading to privilege escalation or corporate espionage.
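Organizations can check their own footprint for the same class of misconfiguration. The sketch below, a minimal example rather than a full scanner, tests whether the two ClickHouse ports cited in the research (8123 for HTTP, 9000 for the native protocol) accept connections on a given host; the loopback address is used here only as a safe default.

```python
# Sketch: checking a host you own for listening ClickHouse ports (8123, 9000).
import socket


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Plain TCP connect check; True means something is listening on the port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def exposed_clickhouse_ports(host: str, ports=(8123, 9000)) -> list:
    """Return the subset of ClickHouse default ports that accept connections."""
    return [p for p in ports if port_open(host, p)]


# Example against localhost; an empty list means neither port is listening.
print(exposed_clickhouse_ports("127.0.0.1"))
```

An open port alone does not prove exposure, but on port 8123 a follow-up request to ClickHouse's `/ping` endpoint succeeding without credentials would indicate an unauthenticated server.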
DeepSeek's rapid rise in the artificial intelligence space has brought increased scrutiny of its security practices. Earlier this week, the company said it was struggling to register new users due to "large-scale malicious attacks" against its services.
Additionally, Israeli cyber threat intelligence company KELA said that while R1 bears similarities to OpenAI's ChatGPT, "it is significantly more vulnerable" to being jailbroken.
"KELA's AI Red Team was able to jailbreak the model across a wide range of scenarios, enabling it to generate malicious outputs, such as ransomware development, fabrication of sensitive content, and detailed instructions for creating toxins and explosive devices," KELA researchers said in a blog post Monday.
Wiz noted in its blog post that the breakneck pace of growth in the AI space should push companies developing the technology to put greater focus on security practices before rushing their products to market.
"The world has never seen a piece of technology adopted at the pace of AI," the company wrote. "Many AI companies have rapidly grown into critical infrastructure providers without the security frameworks that typically accompany such widespread adoptions. As AI becomes deeply integrated into businesses worldwide, the industry must recognize the risks of handling sensitive data and enforce security practices on par with those required of public cloud providers and major infrastructure providers."
DeepSeek did not respond to CyberScoop's request for comment.