Sources in the European Parliament and the European Council told CSIS that when the AI Act was drafted, their intention was that fine-tuning a model would not immediately trigger regulatory obligations. Rather, the rules for GPAI models are intended to apply only to the upstream model, the baseline from which all the different applications along the AI value chain derive.
What the AI Act would trigger for a fine-tune is only the simpler “value chain” provisions. That is, the fine-tuner need only supplement the information already provided by the upstream model provider with the modifications it has made. For example, if a law firm fine-tuned GPT-4 by training it on thousands of case-law decisions and legal briefs to build its own specialized application “tailored to lawyers,” it would not need to draft a complete set of detailed technical documentation, its own copyright policy, and a summary of copyrighted training data. Instead, the law firm would only need to append to the existing documentation the process it used to fine-tune GPT-4 and the datasets it used (in this example, the one containing the thousands of case-law decisions and briefs).
Step 1: Is R1 equivalent to a fine-tune?
If the AI Office confirms that distillation is a form of fine-tuning, and especially if it concludes that the various other training techniques used for R1 all fall within the scope of “fine-tuning,” then DeepSeek would only have to supplement the information passed along the value chain, just like the law firm. The information and research papers DeepSeek has already published appear to satisfy this requirement (although the disclosure would be incomplete if OpenAI’s claims are true).
Conversely, if the guidelines indicate that the combination of distillation and the other fine-tuning techniques used for R1 is sophisticated enough to have created a new model in its own right, then the AI Act’s provisions for GPAI models will apply from August 2, 2025. More precisely, the AI Act states that GPAI models already placed on the market before that date must “take the necessary steps in order to comply with the obligations by August 2, 2027,” that is, within two years.
Step 2: If R1 is a new model, can it be designated as a GPAI model with systemic risk?
At this stage, EU regulators would have to take a further step to decide exactly which provisions R1 must comply with. Given that R1’s training compute is well below the EU AI Act’s threshold of 10^25 FLOPs, they could conclude that DeepSeek only needs to meet the baseline provisions that apply to all GPAI models, namely the technical documentation and copyright provisions (see above). If DeepSeek’s models are considered open source under the interpretation described above, regulators could conclude that DeepSeek would be largely exempt from most of these measures, with the exception of the copyright provisions. As explained above, this remains to be clarified.
In any case, it remains possible for the AI Office to designate R1 as a GPAI model with systemic risk even though it sits well below the 10^25 FLOPs threshold that would normally trigger that designation. The AI Act indeed allows a GPAI model below this compute threshold to be designated as a systemic-risk model anyway, based on a combination of other criteria (for example, the number of parameters, the size of the training data, and the number of registered business users).
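To make the two designation paths concrete, here is a minimal sketch in Python. The 10^25 FLOPs threshold and the example criteria (parameters, data size, registered business users) are the ones summarized above; the specific cutoff values and the “two of three” combination rule are invented for illustration, since the AI Act leaves that weighing to the AI Office.

```python
from dataclasses import dataclass

# Presumption of systemic risk at or above this training-compute threshold.
COMPUTE_THRESHOLD_FLOPS = 1e25


@dataclass
class GPAIModel:
    name: str
    training_compute_flops: float
    num_parameters: int
    dataset_tokens: int
    registered_business_users: int


def is_systemic_risk(model: GPAIModel) -> bool:
    # Path 1: training compute at or above the threshold triggers the
    # presumption of systemic risk automatically.
    if model.training_compute_flops >= COMPUTE_THRESHOLD_FLOPS:
        return True
    # Path 2: below the threshold, the AI Office may still designate the
    # model based on a combination of other criteria. These cutoffs and
    # the two-of-three rule are hypothetical placeholders.
    signals = [
        model.num_parameters >= 100e9,               # number of parameters
        model.dataset_tokens >= 10e12,               # size of training data
        model.registered_business_users >= 10_000,   # registered business users
    ]
    return sum(signals) >= 2


# A model like R1 could sit below the compute threshold yet still be
# designated via the second path (all figures here are hypothetical).
r1_like = GPAIModel(
    name="hypothetical-R1-like",
    training_compute_flops=4e24,
    num_parameters=600_000_000_000,
    dataset_tokens=15_000_000_000_000,
    registered_business_users=50_000,
)
print(is_systemic_risk(r1_like))  # True under these assumed cutoffs
```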
If this designation occurs, DeepSeek would have to put in place adequate model evaluations, risk assessment and mitigation measures, and cybersecurity safeguards. It could not escape these through the open-source exemption, which does not apply to systemic-risk models. Failure to comply would likely result in fines of up to 3 percent of DeepSeek’s annual turnover (a figure broadly comparable to annual revenue) or in being restricted from the EU single market.
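As a quick worked example of how that penalty ceiling behaves (a minimal sketch, not a reading of the final legal text): the 3 percent figure comes from the paragraph above, while the EUR 15 million alternative floor, applied “whichever is higher,” reflects the Act’s GPAI penalty provision and should be verified against the final text.

```python
def max_gpai_fine(annual_turnover_eur: float) -> float:
    """Upper bound on a GPAI provider's fine: 3% of annual worldwide
    turnover or EUR 15 million, whichever is higher (assumed floor)."""
    return max(0.03 * annual_turnover_eur, 15_000_000)


# For a provider with EUR 1 billion in turnover, the 3% cap binds:
print(max_gpai_fine(1e9))    # 30000000.0 (EUR 30 million)
# For a smaller provider with EUR 100 million, the floor binds instead:
print(max_gpai_fine(100e6))  # 15000000 (EUR 15 million)
```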
The operational details of the AI Act’s rules for GPAI models are currently being written into the so-called Code of Practice. Because DeepSeek is not a participant in the drafting of the code, American AI companies have an excellent opportunity to continue engaging constructively in the drafting process, as it will allow them to shape the rules that DeepSeek will have to follow a few months from now.
DeepSeek: New concern or opportunity for Europe? The different scenarios
The AI Office will have to tread very carefully with the fine-tuning guidelines and the possible designation of DeepSeek R1 as a systemic-risk GPAI model. On the one hand, DeepSeek and the similar replications and mini-models it has inspired have shown that it is entirely possible for European companies to compete with, and possibly outperform, the most advanced large-scale models using far less compute and at a fraction of the cost. This could open up a whole new range of attractive opportunities. The AI Office will have to navigate the trade-off between securing robust guardrails and the need to stimulate Europe’s lagging AI ecosystem.
Scenario 1: R1 is considered a simple fine-tune.
If, as described above, R1 is considered a fine-tune, European companies reproducing similar models with similar techniques would escape practically all of the AI Act’s provisions. This could pave the way for hundreds of startups to quickly become competitive with American giants such as OpenAI, Meta, and Anthropic, which would instead have to comply with the most demanding tier of GPAI obligations.
At the same time, DeepSeek’s R1 and similar models from around the world would themselves escape those rules, with only the GDPR left to protect EU citizens from harmful practices. The GDPR could, however, itself lead to an EU-wide restriction on access to R1. This would give EU companies even more room to compete, as they are better placed to navigate the bloc’s privacy and security rules.
This overall scenario would sit well with the clear shift in focus toward competitiveness under the new EU legislative term, which runs from 2024 to 2029. On January 29, the European Commission published a Competitiveness Compass, a roadmap detailing its approach to innovation. The document foresees a key role for AI in upgrading European Union industry, and it lists several policy and legislative initiatives to that end. In the words of EU commissioner for tech sovereignty Henna Virkkunen, “the EU must become a real AI continent.” This scenario is therefore perhaps the most desirable for EU companies, though perhaps the least desirable for U.S. ones.
Scenario 2: R1 is considered a GPAI model.
If R1 is considered a full-fledged GPAI model (triggering the baseline tier of obligations), and perhaps a systemic-risk GPAI model, it will have to comply with the highest set of the AI Act’s requirements for GPAI models. Similar models could still flourish in Europe, but they would also have to follow the AI Act’s rules, at the very least on transparency and copyright. Moreover, if R1 is designated a systemic-risk model, the ease of reproducing similar results in many new models in Europe could lead to a proliferation of systemic-risk models. This scenario was not foreseen by the European co-legislators when the AI Act was negotiated, as the assumption was still that the top tier would be occupied by only a handful of providers.
In any case, this scenario might be the most beneficial for American companies, which would remain competitive and would be on a level playing field with DeepSeek’s R1 and with EU companies. They would also have the added advantage of participating in the ongoing drafting of the Code of Practice detailing how to comply with the AI Act’s requirements for GPAI models. The European Union’s Mistral would likewise benefit from a first-mover advantage, but not the many EU startups that could build on these innovations, as they are largely absent from the process.
Conclusion
The novelty introduced by R1 creates both new concerns and incredible opportunities for Europe in the AI space. Although it is not yet clear whether, and to what extent, the EU AI Act will apply to it, R1 still raises many privacy, safety, and security concerns.
To mitigate the safety and security concerns, Europe’s best option is to designate R1 as a full GPAI model, as described above in Scenario 2. European companies replicating R1 would then also have to follow the AI Act’s rules, at the very least on transparency and copyright. European companies must already comply with the GDPR and generally integrate responsible AI governance practices and security measures into their AI products. Complying with the transparency and copyright rules should therefore not create too many additional hurdles for them. The cost and compute efficiencies that R1 has demonstrated offer European AI companies the opportunity to be far more competitive than seemed possible a year ago, perhaps even more competitive than R1 itself on the EU market. EU models could be not only as efficient and accurate as R1 but also more trusted by consumers on privacy, safety, and security. It may be the best of both worlds, but European officials and companies will have to navigate a complex road ahead.
Laura Caroli is a senior fellow with the Wadhwani AI Center at the Center for Strategic and International Studies in Washington, D.C.
This report is made possible by general support to CSIS. No direct sponsorship contributed to this report.