While full automation will take more time to mature, agentic AI gets security and DevOps teams 90% of the way there. It promises to dramatically reduce the toil and operational work our teams have traditionally faced in closing tickets and resolving issues. Instead of reacting to risks one by one and hunting for individual fixes, security teams can leverage agentic AI to analyze all available options, assess their impact and effectiveness, and implement high-impact changes that reduce both risk exposure and backlog. It's a shift in mindset: from asking "how many risks do we have?" to "how many risks can we eliminate with a single fix?"

While different AI agents specialize and excel in their individual domains (one analyzing root causes, another deduplicating and correlating findings, another assessing business impact, a fourth verifying policy alignment), they can also work together as a system. Agents share context and insights with one another to identify the most effective remediation paths.

Risk remediation is not a challenge humans can take on alone. Fortunately, we don't have to face it without help. Agentic AI can pave the way for security teams to effectively manage risk exposure across complex, hybrid and multi-cloud environments.

Snir Ben Shimol, co-founder and CEO, Zest Security

SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity experts. Each contribution aims to bring a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.
Commentary: Imagine a chef's kitchen, where each tool has a specific purpose: a cast-iron pan for a perfect sear, a wok for stir-fries, a sashimi knife for precise sushi cuts. Agentic AI brings the same level of specialization to security, equipping teams with dedicated AI agents, each designed to excel at a specific task.

Agentic AI is already being applied across a wide range of security use cases, improving efficiency in security operations as well as security architecture and engineering, and empowering teams to achieve greater outcomes. For example, specialized agents can autonomously assist with threat hunting, surfacing emerging threats faster than traditional methods, generating secure-by-design code, and drafting or validating policies.

AI agents and agentic AI, their varied applications and how best to secure them, will be a hot topic over the coming days at this year's RSA Conference, with many sessions dedicated to the subject. Discussions will cover the potential of agentic AI, real-world applications, emerging threats and heightened risks.

From hype to implementation

As agentic AI moves from concept to implementation, the spotlight turns to where the rubber meets the road: real-world security challenges. Among its many promising applications, one of the most impactful is cloud security, accelerating how teams prioritize and resolve security risks. Cloud security teams are overwhelmed, drowning in an endless backlog of cloud misconfigurations, policy violations and vulnerabilities. The vulnerability management playbook is outdated. The remediation process doesn't scale.
Security and DevOps teams still manually validate findings, analyze root causes, prioritize based on real business context, implement fixes and run QA tests; it never ends. Attackers, armed with AI, now move faster than ever, identifying and exploiting weaknesses within days. Meanwhile, fixing a critical vulnerability often takes months, involving lengthy reviews just to avoid breaking something else. And with the next compliance audit looming, leadership keeps asking: "Why do we still have the same vulnerabilities?"

The volume of risks, coupled with the lack of automation, leaves security and DevOps teams chasing individual tickets without the capacity to implement remediation strategies and new policies that address broader categories of risk and prevent future ones. Agentic AI could change that.

Using agentic AI for cloud security

Agentic AI promises to transform how security teams operate and, in the context of security risk management, lets them break the long manual remediation cycle and implement a proactive, scalable and repeatable program. Here are a few practical ways teams can apply agentic AI:
Context-aware risk prioritization: Evaluate multiple factors such as exploitability, runtime presence, internet exposure, existing mitigating controls and business criticality.
Remediation impact simulation: Simulate the implementation of fixes, package updates, IaC changes and code changes to identify high-impact, low-effort solutions.
Root cause analysis: Pinpoint the origin of an issue, tracing it back to the affected assets, lines of code, IaC tooling and DevOps owner.
Code generation: Generate replacement IaC code based on the organization's infrastructure and policies.
Mitigation identification: Analyze cloud security services and guardrails to detect available mitigations that reduce risk severity.
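To make the first item above concrete, context-aware prioritization can be thought of as a weighted scoring function over the signals an agent collects. The following is a minimal Python sketch; the field names, weights and example findings are invented for illustration and are not from any particular product:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One cloud security finding. All fields are illustrative."""
    name: str
    exploitability: float        # 0-1, e.g. derived from an EPSS-style score
    loaded_at_runtime: bool      # is the vulnerable component actually running?
    internet_exposed: bool
    mitigated: bool              # e.g. a WAF or guardrail already covers it
    business_criticality: float  # 0-1, from the asset inventory

def priority_score(f: Finding) -> float:
    """Higher score means fix sooner. Weights here are assumptions."""
    score = 0.4 * f.exploitability + 0.3 * f.business_criticality
    score += 0.2 if f.internet_exposed else 0.0
    score += 0.1 if f.loaded_at_runtime else 0.0
    if f.mitigated:
        score *= 0.5  # an existing mitigating control halves effective risk
    return round(score, 3)

findings = [
    Finding("log4j on public API", 0.9, True, True, False, 0.9),
    Finding("stale IAM key, internal batch job", 0.3, False, False, True, 0.4),
]
ranked = sorted(findings, key=priority_score, reverse=True)
```

In practice, an agent would populate these signals automatically, pulling exploitability data, runtime context and asset criticality from the organization's own tooling, and the weights would be tuned to the environment rather than hard-coded.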