Put people first: new measures in the EU
In March, the EU approved the Artificial Intelligence Act to ensure safety, fundamental rights, and AI innovation. The law prohibits specific applications that threaten human rights, such as the use of biometrics to “classify” people, the scraping of internet and CCTV images to build facial recognition databases, and the use of AI for social scoring, predictive policing, or human manipulation.
This was followed in December by the EU Cyber Resilience Act, which requires manufacturers of digital products, software developers, importers, distributors, and resellers to build in cybersecurity features such as incident handling, data protection, and the management of updates and patches. Product manufacturers must also address vulnerabilities as soon as they identify them. Violations can bring costly fines and other penalties.
Also in December, the EU updated its Product Liability Directive (PLD) to include software – unlike other jurisdictions such as the United States, which do not treat software as a “product”. This makes software companies liable for damage if their products contain defects that cause harm, including, by extension, AI models.
Born in the United States: AI regulation
The back half of the year was busy at the federal level in the United States, with the White House publishing its first-ever National Security Memorandum on AI in October. The memo called for “concrete and impactful steps” to:
- Ensure US leadership in the development of safe and trustworthy AI
- Advance US national security with AI
- Drive international agreements on the use and governance of AI
In November, the National Institute of Standards and Technology (NIST) formed a task force – Testing Risks of AI for National Security (TRAINS) – to address the national security and public safety implications of AI. TRAINS members represent the Departments of Defense, Energy, and Homeland Security, as well as the National Institutes of Health, and will facilitate coordinated evaluation and testing of AI models in national security domains such as radiological and nuclear security, chemical and biological security, and cybersecurity.
Also in November, the Departments of Commerce and State convened the International Network of AI Safety Institutes for the first time, focusing on the risks of synthetic content, foundation model testing, and advanced AI risk assessment.
Across the equator: AI regs in Latin America
Most countries in Latin America have taken steps to address the risks of AI while embracing its potential. According to White & Case, Brazil and Chile are among those with the most detailed proposals, while others, such as Argentina and Mexico, have addressed the issue more broadly. Some focus on risk mitigation, whether through prohibitions or regulatory constraints, while others see an opportunity to take a more hands-off approach that invites innovation and international investment.
Know your enemy: AI and cyber risk
To regulate AI, it is important to know what the risks really are. In 2024, OWASP, MIT, and others took on the task of identifying and detailing AI vulnerabilities.
OWASP's LLM Top 10
The Open Worldwide Application Security Project (OWASP) unveiled its 2025 Top 10 list of risks for LLMs. Returning are old chestnuts such as prompt injection, supply chain risks, and improper output handling. New additions include vector and embedding weaknesses, misinformation, and unbounded consumption (an expansion of the earlier denial-of-service risk category).
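To make the “unbounded consumption” entry concrete, here is a minimal sketch of one common mitigation: capping each client's token spend inside a sliding time window before any model call goes out. The `TokenBudget` class and `call_llm` function are hypothetical illustrations, not part of any OWASP or vendor API.

```python
import time
from collections import defaultdict


class TokenBudget:
    """Tracks per-client token spend inside a sliding time window."""

    def __init__(self, max_tokens: int, window_seconds: float) -> None:
        self.max_tokens = max_tokens
        self.window_seconds = window_seconds
        # client_id -> list of (timestamp, tokens) spends
        self._spend: dict[str, list[tuple[float, int]]] = defaultdict(list)

    def try_spend(self, client_id: str, tokens: int) -> bool:
        """Record the spend and return True only if the client stays under budget."""
        now = time.monotonic()
        cutoff = now - self.window_seconds
        # Drop entries that have aged out of the window.
        history = [(t, n) for t, n in self._spend[client_id] if t >= cutoff]
        used = sum(n for _, n in history)
        if used + tokens > self.max_tokens:
            self._spend[client_id] = history
            return False
        history.append((now, tokens))
        self._spend[client_id] = history
        return True


def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; swap in your provider's client here.
    return f"response to: {prompt[:40]}"


budget = TokenBudget(max_tokens=10_000, window_seconds=60.0)


def guarded_completion(client_id: str, prompt: str) -> str:
    # Rough pre-estimate; a production service would use the model's tokenizer.
    estimated_tokens = len(prompt.split()) * 2
    if not budget.try_spend(client_id, estimated_tokens):
        raise RuntimeError("token budget exceeded; request rejected")
    return call_llm(prompt)


print(guarded_completion("client-a", "Summarize the EU AI Act in one line."))
```

The point of the pre-call check is that the cap is enforced before any model cost is incurred, so an attacker scripting thousands of requests hits the budget, not the model.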
OWASP has broadened its concern about “excessive agency”, largely due to the rise of semi-autonomous agentic architectures. As OWASP puts it, “with LLMs acting as agents or in plug-in settings, unchecked permissions can lead to unintended or risky actions, making this entry more critical than ever.”
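A similarly minimal sketch of one control for excessive agency: the agent may only invoke tools from an explicit allowlist, and destructive tools require human approval before running. The tool names and the `dispatch_tool` helper are hypothetical examples, not an OWASP-specified interface; real agent frameworks expose their own permissioning hooks.

```python
from typing import Callable

# Registry of tools the agent may call, plus whether each needs a human
# in the loop before running. Both tool names are made-up examples.
ALLOWED_TOOLS: dict[str, tuple[Callable[[str], str], bool]] = {
    "search_docs": (lambda query: f"results for {query!r}", False),
    "delete_record": (lambda rid: f"deleted record {rid}", True),
}


def dispatch_tool(name: str, argument: str, approved: bool = False) -> str:
    """Run a tool only if it is allowlisted and, when flagged, human-approved."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} is not allowlisted")
    tool, needs_approval = ALLOWED_TOOLS[name]
    if needs_approval and not approved:
        raise PermissionError(f"tool {name!r} requires human approval")
    return tool(argument)


# Treat the agent's requested action as untrusted input:
print(dispatch_tool("search_docs", "cyber resilience act"))  # runs
# dispatch_tool("delete_record", "42")  # raises PermissionError without approval
```

Denying by default and gating destructive actions on approval is exactly the kind of bounded permissioning OWASP's guidance points toward: the model can propose actions, but it cannot grant itself new authority.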