To safeguard the country's critical infrastructure, the federal government is now offering a playbook to help companies navigate the treacherous cybersecurity landscape, including the brewing dangers hidden in artificial intelligence.
The recommendations, issued by the Cybersecurity and Infrastructure Security Agency (CISA), highlight the need for stronger safeguards as AI is increasingly integrated into essential sectors such as energy, transportation and healthcare. Experts in the field are closely examining the guidelines, offering additional insights and recommendations to strengthen the country's defenses against potential AI-related disruptions and attacks.
“AI systems are vulnerable to hackers mainly because they are software applications built by engineers,” Chase Cunningham, vice president of security market research at G2, told PYMNTS. “They can harbor flaws in their source code, often incorporate open-source components with their own vulnerabilities and generally run on cloud infrastructure, which, despite its advances, remains susceptible to security threats.”
AI is not only a threat; it is also revolutionizing how security teams fight cyber threat actors, streamlining their processes for greater speed and efficiency. By analyzing extensive data and detecting complex patterns, AI automates the initial phases of incident investigation. These methods allow security professionals to begin their work with a full understanding of the situation, accelerating response times.
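As a rough illustration of the pattern detection described above, the sketch below flags unusual activity in synthetic log data with an unsupervised model. This is a minimal, hypothetical example using scikit-learn's IsolationForest; the feature names, data and thresholds are assumptions made for illustration, not details drawn from any vendor mentioned in this article.

```python
# Hypothetical sketch of AI-assisted incident triage: flag outlier log
# entries so analysts start their investigation with the unusual events.
# The "log features" here are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic per-host features: requests per minute and failed-login count.
normal_traffic = rng.normal(loc=[100.0, 2.0], scale=[15.0, 1.0], size=(500, 2))
suspicious = np.array([[480.0, 35.0], [520.0, 40.0]])  # bursts of failed logins
events = np.vstack([normal_traffic, suspicious])

# Unsupervised anomaly detection: no labeled incidents required.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(events)  # -1 marks an anomaly

for idx in np.where(labels == -1)[0]:
    print(f"Event {idx} flagged for analyst review: {events[idx]}")
```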
A growing threat
The guidelines emphasize a comprehensive approach, urging operators to understand their dependencies on AI vendors and how AI is being used. They also urge critical infrastructure owners to establish protocols for reporting AI security threats and to regularly assess AI systems for vulnerabilities.
The guidelines describe opportunities for AI in operational awareness, customer service automation, physical security and forecasting. However, the document also warns of potential AI risks to critical infrastructure, including AI-enabled attacks, attacks targeting AI systems, and possible flaws in AI design and implementation that could cause malfunctions or unintended consequences.
“Based on CISA’s expertise as the national coordinator for critical infrastructure security and resilience, the DHS guidelines are the agency’s cross-sector analysis of AI-specific risks to the critical infrastructure sectors and will serve as a key tool to help owners and operators mitigate AI risk to their infrastructure,” CISA Director Jen Easterly said in a statement.
The rise of AI has produced both new attack methods and opportunities for more deceptive hacking tactics, Schellman CEO Avani Desai told PYMNTS. For example, there has been an increase in highly automated and effective phishing campaigns and other black-hat applications. In addition, AI has raised concerns about rightful ownership and the appropriate use of intellectual property.
“Because AI must be trained on large datasets to be effective, and many of those sources may include personally identifiable information (PII), medical information and other sensitive and potentially private data, generative AI users can also enter sensitive information into these tools, which raises privacy concerns,” Desai said.
Some experts say the new federal guidelines do not go far enough. Cyber defenses must be far more collaborative, Asaf Kochan, president and co-founder of the cybersecurity company Sentinel, told PYMNTS.
“That means everyone has to do their part,” he said. “Companies operating critical infrastructure must take steps to protect themselves and their customers from AI-driven cybercrime, which means they should adopt comprehensive security solutions that can keep pace with AI-generated threats and run on modern hardware.”
Keeping infrastructure safe
To improve AI security, companies should focus on critical defenses such as rigorously testing open-source components, implementing code signing, employing software bills of materials (SBOMs) and verifying provenance, Kodem Security CEO Aviv Mussinger told PYMNTS. Continuous vulnerability monitoring is essential to head off potential security threats and ensure robust protection for AI systems.
“The rise of AI-generated code is transforming software development, requiring more agile and integrated security measures,” he said.
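As one hedged illustration of the continuous monitoring Mussinger describes, the sketch below checks a pinned list of open-source dependencies against the public OSV.dev vulnerability database. The package list is a hypothetical assumption, and this is a minimal example of the general technique, not a description of Kodem's product.

```python
# Hedged sketch: look up known advisories for pinned open-source
# dependencies via the public OSV.dev query API. The dependency list
# is an illustrative assumption, not taken from the article.
import requests

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

# Hypothetical pinned dependencies an AI service might carry.
dependencies = [
    {"name": "pillow", "version": "9.0.0"},
    {"name": "numpy", "version": "1.26.4"},
]

for dep in dependencies:
    payload = {
        "package": {"name": dep["name"], "ecosystem": "PyPI"},
        "version": dep["version"],
    }
    response = requests.post(OSV_QUERY_URL, json=payload, timeout=10)
    response.raise_for_status()
    advisories = response.json().get("vulns", [])
    ids = ", ".join(v["id"] for v in advisories) if advisories else "no known advisories"
    print(f"{dep['name']}=={dep['version']}: {ids}")
```

Run regularly in a CI pipeline, a check like this turns the one-time audit of open-source components into the kind of continuous monitoring the guidance calls for.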
To stay safe in today's fast-moving digital world, organizations can keep their SBOMs and Vulnerability Exploitability eXchange (VEX) documents current using DevSecOps practices, Mussinger said. This lets them maintain security and compliance while keeping up with the pace of rapid development. The approach also addresses the challenges posed by AI, offering defense in depth in a constantly evolving threat environment.
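A minimal sketch of pairing SBOM-derived scan results with a VEX document is shown below. The findings.json and vex.json files and their fields are simplified, hypothetical assumptions; real CycloneDX and OpenVEX documents carry more structure, so this is illustrative only.

```python
# Hedged sketch: suppress vulnerability findings that a VEX document
# marks as not exploitable, so only actionable issues reach the team.
# The file layouts are simplified, assumed structures for illustration.
import json

def load_json(path):
    with open(path, encoding="utf-8") as handle:
        return json.load(handle)

# findings.json: hypothetical scanner output derived from an SBOM, e.g.
# [{"purl": "pkg:pypi/pillow@9.0.0", "vuln_id": "CVE-2022-22817"}]
findings = load_json("findings.json")

# vex.json: hypothetical, simplified VEX statements, e.g.
# [{"vuln_id": "CVE-2022-22817", "status": "not_affected"}]
vex_statements = load_json("vex.json")

# Drop anything the VEX document says does not affect this product.
suppressed = {
    stmt["vuln_id"] for stmt in vex_statements if stmt.get("status") == "not_affected"
}

actionable = [f for f in findings if f["vuln_id"] not in suppressed]
for finding in actionable:
    print(f"Action needed: {finding['vuln_id']} in {finding['purl']}")
```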
“AI systems must be designed with security in mind,” he said. “An AI system that is secure by design reduces the risk of downstream threats once the system is built and running. Secure-by-design principles must be incorporated into every phase of the development lifecycle. This is a best practice for the development of any mission-critical system, not just AI systems.”
For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.