On October 24, 2024, President Biden issued the first National Security Memorandum (NSM) on artificial intelligence (AI), carrying out a directive (subsection 4.8) set out in the administration's Executive Order on AI and describing how the federal government intends to approach AI national security policy. The NSM also includes a classified annex, which addresses sensitive national security issues. The publication of the NSM follows other recent national security-focused AI actions by the Biden administration, including the Department of Commerce's proposed rule to institute mandatory reporting requirements for developers of powerful AI models (see our legal update on the proposal) and its interim final rule adopting new export controls on advanced semiconductor manufacturing equipment, among other technologies (see our legal update on the final rule).
The NSM is grounded in the premise that "progress at the frontier of AI will have significant implications for national security and foreign policy in the near future."1 In that spirit, the NSM directs the federal government to take a number of actions to: (1) ensure that the United States leads the world's development of safe, secure, and trustworthy AI; (2) harness advanced AI technologies to advance the United States' national security mission; and (3) advance international consensus and governance around AI. Although the NSM focuses on actions to be taken by the federal government, it promises to have significant implications for private sector entities as they develop and deploy powerful AI models.
In this legal update, we summarize the main provisions and directives of the NSM.
Summary of the National Security Memorandum
The NSM sets out three main objectives, with corresponding directives, concerning AI and national security.
1. Lead the global development of safe, secure, and trustworthy AI: To maintain and extend US leadership in AI development, the NSM identifies key policies, including promoting progress and competition in AI development; protecting industry, civil society, academia, and related infrastructure from foreign intelligence threats; and developing technical and policy tools to address the potential safety, security, and trustworthiness risks posed by AI. Key directives in this area include:
- The Department of State (DOS), the Department of Defense (DOD), and the Department of Homeland Security (DHS) must use all available legal authorities to attract and facilitate the immigration of noncitizens with relevant technical expertise who would enhance US competitiveness in AI and related fields.
- Several agencies – including the Department of Commerce (DOC), the DOD, and the Department of Energy (DOE) – must coordinate their efforts, plans, investments, and policies to facilitate and encourage the development of advanced AI semiconductors, AI-dedicated computing infrastructure, power transmission links, and clean energy.
- The Office of the Director of National Intelligence (ODNI), in coordination with other agencies, must identify critical nodes in the AI supply chain and develop a list of ways those nodes could be disrupted or compromised by foreign actors. These agencies must then take steps to mitigate those risks.
- The Committee on Foreign Investment in the United States (CFIUS) must, as appropriate, consider whether a covered transaction involves foreign actor access to proprietary information on AI training techniques, algorithmic improvements, hardware advances, critical technical artifacts (CTAs), or other proprietary insights.
- The DOC, acting through the AI Safety Institute (AISI) and the National Institute of Standards and Technology (NIST), will serve as the federal government's primary point of contact with private sector developers to facilitate voluntary testing of dual-use foundation models. The DOC must establish a capability to lead these tests and issue guidance and benchmarks for AI developers on how to test, evaluate, and manage the risks arising from these models. AISI must submit a report to the President summarizing the findings of its voluntary testing and share the results with the developers of those models.
- The National Security Agency (NSA) will develop the capability to perform rapid, systematic, classified testing of AI models' capacity to "detect, generate, and/or exacerbate offensive cyber threats," and the DOE will do the same with respect to "nuclear and radiological risks."
- The DOE, DHS, and AISI must coordinate to develop a roadmap for evaluations of AI models' capacity "to generate or exacerbate deliberate chemical and biological threats." The DOE must develop a pilot program to establish the capability to conduct classified testing in this area, and other agencies must support efforts to use AI to improve biosafety and biosecurity.
- The DOD, DHS, the Federal Bureau of Investigation, and the NSA will publish unclassified guidance concerning known AI cybersecurity vulnerabilities and threats; best practices for avoiding, detecting, and mitigating such issues during model training and deployment; and the integration of AI into other software systems.
2. Harness advanced AI technologies to advance the US national security mission: To further integrate AI into US national security functions, the NSM identifies key policies, including adapting partnerships, policies, and infrastructure to enable the effective and responsible use of AI, and developing robust AI governance and risk management policies. Key directives in this area include:
- The DOD and ODNI must establish a working group to address issues involving the procurement of AI by DOD and Intelligence Community (IC) elements. The working group must provide recommendations to the Federal Acquisition Regulatory Council regarding changes to existing regulations and guidance in order to accelerate and simplify the AI procurement process.
- The DOD and ODNI must engage with private sector stakeholders, including technology and defense companies, to identify and understand emerging AI capabilities.
- Agency heads must monitor, assess, and mitigate risks directly tied to their agency's development and use of AI, including risks related to physical safety, privacy, discrimination and bias, transparency, accountability, and performance.
- Agency heads that use AI as part of a National Security System (NSS) must issue or update guidance on AI governance and risk management for NSS.
3. Promote a stable, responsible, and global international AI governance landscape: US international engagement on AI must "support and facilitate improvements to the safety, security, and trustworthiness of AI systems worldwide; promote democratic values, including respect for human rights, civil rights, civil liberties, privacy, and safety; prevent the misuse of AI in national security contexts; and promote equitable access to AI's benefits." To this end:
- The Department of State, in coordination with other agencies, "will produce a strategy for the advancement of international AI governance norms in line with safe, secure, and trustworthy AI, and democratic values, including human rights, civil rights, civil liberties, and privacy."
Conclusion
The scope of the NSM is not limited to the deployment of AI in the national security context. It also treats a broad AI supply chain – encompassing not only semiconductors and computing hardware but also energy and power generation – and the effects of commercial AI use as vital to US national security. In light of this framing, the NSM has important implications not only for AI developers and defense contractors but also for other sectors such as energy and infrastructure. Moreover, the NSM makes clear that federal national security policy for AI is likely to touch on a wide range of issues in the coming years, including topics as diverse as immigration, foreign investment, federal research, public-private collaboration, government contracting, and supply chain security.
1 See the White House Fact Sheet on the NSM.