In a short time since its beginnings, Deepseek has made waves in the AI industry, collecting praise and meticulous examination. The meteoric increase in the model has fueled its debate on its claimed efficiency, its concerns as an intellectual property and its reliability and its general security.
A week ago, the information Security Buzz wrote about how a quality Security analysis has increased significant red flags on the risks of Deepseek-Ri, in particular in corporate and regulatory environments.
NOW, New research on Appsocdiscovered more alarming security risks associated with the Deepseek-R1 model, raising critical questions about its ability to use the company.
Massive safety failures
The APPSOC research team carried out a Deepseek-R1 in-depth security analysis using its AI security platform, subjecting the model to static analysis, dynamic tests and red team techniques. The results were at least worrying:
- Jailbreaking: A failure rate of 91%. Deepseek-R1 systematically bypassed the safety mechanisms intended to prevent the generation of harmful or limited content.
- Quick injection attacks:A failure rate of86%. The model was sensitive to contradictory prompts, resulting in incorrect results, violations of policies and a system compromise.
- Generation of malware:A failure rate of93%. The tests have shown that Deepseek-R1 could generate malware and code extracts at critical levels.
- Risks of the supply chain:A failure rate of72%. The absence of clarity around the origins of the model of the model and external dependencies has increased its vulnerability.
- Toxicity:Failure rate68%. When invited, the model generated answers with toxic or harmful language, indicating bad guarantees.
- Hallucinations:A failure rate of81%. Deepseek-R1 produced incorrect optional information or manufactured at high frequency.
These vulnerabilities have led researchers to Appsoc to warn against the deployment of Deepseek-R1 in business environments, in particular when data security and regulatory compliance are priorities.
IA risk quantification
Beyond the identification of risks, the Appsoc attributes a risk score of owner to the models, measuring exposure to security. Deepseek-R1 marked very worrying8.3 out of 10With the following ventilation:
- Safety risk score (9.8):This score reflects vulnerabilities such as Jailbreak exploits, malware generation and rapid manipulation, which are the most critical areas of concern.
- Risk of compliance risk (9.0): TThe model, coming from an editor -based in China and using data sets with an unknown provenance, has laid significant risk of compliance, in particular for entities with strict regulatory obligations.
- Operational risk score (6.7):Although it is not as serious as other factors, this score has highlighted the risks linked to the origin of the model and exposure to the network – critical for companies incorporating AI into production environments.
- Adoption risk score (3.4):Although Deepseek-R1 has collected high adoption rates, the problems reported by users (325 noted vulnerabilities) played a key role in this relatively low score.
These results highlight the criticality of continuous security tests for AI models in order to ensure their safety when deployed in corporate parameters.
An alarm clock for companies
The chief scientist and co-founder of the Appsoc, Mali Gorantla, says that Deepseek-R1 should not be deployed for business use cases, in particular those involving sensitive data or intellectual property.
“In the race to adopt advanced AI, companies often focus on performance and innovation while neglecting security. However, models like Deepseek-R1 highlight the growing risks of this approach. IA systems vulnerable to jailbreaks, the generation of malware and toxic results can cause catastrophic consequences. »»
Gorantla adds that Appsoc results suggest that even models with millions of downloads and generalized adoption can accommodate significant security defects. “It should be used as alarm clock.”
Why these failures count
As the adoption of AI accelerates, companies must find ways to balance innovation with security. Vulnerabilities discussed today highlight the potential consequences of the negligence of AI security. After all, compromise AI models can expose sensitive business data, leading to data violations.
In addition, biases or toxic IA outings can erode confidence, and non-compliance with data protection laws can lead to heavy fines and other legal misfortunes.
This also makes a broader problem in the development of AI: many models always prioritize safety on security – a large non -no. As AI is integrated into critical industries such as finance and health care, continuous tests and surveillance must become a standard practice.
AI models are not static; They are evolving with updates, so that the current security assessments are crucial. Deepseek was assaulted with problems in a few weeks, and the security risks associated with this tool only strengthen the importance of proactive AI risks management.
The opinions expressed in this article belong to individual contributors and do not necessarily reflect the views of the information security buzz.