The U.S. Department of Homeland Security (DHS) has released recommendations describing how to safely develop and deploy artificial intelligence (AI) in critical infrastructure. The recommendations apply to all actors in the AI supply chain, from cloud and compute infrastructure providers, through AI developers, down to critical infrastructure owners and operators. Recommendations for civil society organizations and the public sector are also provided.
The voluntary recommendations of the “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure” examine each of the roles in five key areas: securing environments, driving responsible design of models and systems, implementing data governance, ensuring safe and secure deployment, and monitoring performance and impact. The framework also offers technical and procedural recommendations to improve the safety, security, and reliability of AI systems.
AI is already being used for resilience and risk mitigation across sectors, DHS said in a statement, citing applications such as earthquake detection, power grid stabilization, and mail triage.
The framework examines the responsibilities of each role:
- Cloud and IT infrastructure providers must audit their hardware and software supply chain, implement rigorous access management, and protect the physical security of data centers powering AI systems. The framework also contains recommendations on supporting customers and downstream processes by monitoring abnormal activities and establishing clear processes for reporting suspicious and harmful activities.
- AI developers should take a safe-by-design approach, assess dangerous capabilities of AI models, and “ensure model alignment with human-centered values.” The framework further encourages AI developers to implement strong privacy practices; conduct assessments that test for possible biases, failure modes, and vulnerabilities; and support independent assessments for models that pose increased risks to critical infrastructure systems and their consumers.
- Critical infrastructure owners and operators should deploy AI systems securely, including maintaining strong cybersecurity practices that account for AI risks, protecting customer data when fine-tuning AI products, and ensuring meaningful transparency regarding their use of AI to provide goods, services, or benefits to the public.
- Civil society, including universities, research institutions, and consumer advocates engaged on AI safety and security issues, should continue to develop standards alongside government and industry and conduct research on AI evaluations that consider critical infrastructure use cases.
- Public sector entities, including federal, state, local, tribal, and territorial governments, should advance standards of practice for AI safety and security through statutory and regulatory measures.
“The framework, if widely adopted, will go a long way toward better ensuring the safety and security of essential services that provide clean water, consistent electricity, internet access and much more,” said DHS Secretary Alejandro N. Mayorkas in a statement.
The DHS framework provides a model of shared and distinct responsibilities for the safe and secure use of AI in critical infrastructure. It also builds on existing risk frameworks to allow entities to assess whether the use of AI for certain systems or applications carries serious risks that could cause harm.
“We want the framework to be, frankly, a living document and also evolve as developments in the industry evolve,” Mayorkas said on a media call.