Cybersecurity concerns in 2024 could be summed up in two letters: AI (or five letters if you narrow it down to gen AI). Organizations are still in the early stages of understanding the risks and rewards of this technology. For all the good it can do to improve data protection, support compliance and enable faster threat detection, threat actors are also using AI to accelerate their social engineering attacks and sabotage AI models with malware.
AI may have captured the lion’s share of attention in 2024, but it wasn’t the only cyber threat organizations had to contend with. Credential theft remained a problem, with a 71% year-over-year increase in attacks using compromised credentials. The ongoing skills shortage cost companies an additional USD 1.76 million per data breach. And as more companies rely on the cloud, it should come as no surprise that there was a spike in cloud intrusions.
But there were positive steps in cybersecurity over the past year. CISA’s Secure by Design program signed on more than 250 software manufacturers committed to improving their cybersecurity hygiene. CISA also introduced its cyber incident reporting portal to improve how organizations share cyber threat information.
Last year’s cybersecurity predictions focused on AI and its impact on how security teams will work in the future. This year’s predictions also center on AI, suggesting that cybersecurity may have reached a point where security and AI are interdependent, for better and for worse.
Here are this year’s predictions.
Shadow AI is everywhere (Akiba Saeedi, Vice President, IBM Security Product Management)
Shadow AI will prove to be more common, and riskier, than we thought. Enterprises have a growing number of generative AI models deployed across their systems every day, sometimes without their knowledge. In 2025, organizations will truly see the scope of “shadow AI”: unsanctioned AI models used by staff that aren’t properly governed. Shadow AI presents a major risk to data security, and the businesses that successfully confront this issue in 2025 will use a mix of clear governance policies, comprehensive workforce training and diligent detection and response.
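As a loose illustration of the “detection” piece, the Python sketch below scans a proxy or firewall log for outbound requests to well-known hosted gen AI API endpoints that haven’t been sanctioned, one simple signal a team might use to surface shadow AI usage. The domain lists, log format and `find_shadow_ai` helper are illustrative assumptions, not anything described in the predictions themselves.

```python
# Hypothetical sketch: flag outbound traffic to known gen AI API endpoints
# that are not on an approved list. Log format and domain lists are assumed.
import csv
from collections import Counter

# Domains commonly associated with hosted gen AI APIs (illustrative only)
KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

# Services the organization has formally sanctioned (assumption)
APPROVED_DOMAINS = {"api.openai.com"}


def find_shadow_ai(proxy_log_path: str) -> Counter:
    """Count requests per (user, domain) to unapproved AI endpoints."""
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: user, dest_domain
            domain = row["dest_domain"].strip().lower()
            if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_DOMAINS:
                hits[(row["user"], domain)] += 1
    return hits


if __name__ == "__main__":
    for (user, domain), count in find_shadow_ai("proxy_log.csv").most_common():
        print(f"{user} -> {domain}: {count} requests to an unsanctioned AI API")
```

In practice this kind of signal would only supplement governance policies and training; it catches known endpoints, not models run locally or behind internal services.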
Identity transformation (Wes Gyure, Executive Director, IBM Security Product Management)
How enterprises think about identity will continue to transform in the wake of hybrid cloud and application modernization initiatives. Recognizing that identity has become the new security perimeter, enterprises will continue their shift to an identity-first strategy, managing and securing access to applications and critical data, including gen AI models. In 2025, a foundational component of this strategy will be building an effective identity fabric, an integrated set of identity tools and services. When done right, this will be a welcome relief to security professionals, taming the chaos and risk caused by a proliferation of multicloud environments and scattered identity solutions.
Everyone must work together to manage threats (Sam Hector, Global Strategy Leader, IBM Security)
Cybersecurity teams will no longer be able to effectively manage threats in isolation. Threats from generative AI and hybrid cloud adoption are evolving rapidly. Meanwhile, the risk that quantum computing poses to modern standards of public-key encryption will become unavoidable. With the maturation of new quantum-safe cryptography standards, there will be a drive to discover encrypted assets and accelerate the modernization of cryptography management. Next year, successful organizations will be those whose executives and diverse teams jointly develop and enforce cybersecurity strategies, embedding security into the organizational culture.
Prepare for post-quantum cryptography standards (Ray Harishankar, IBM Fellow, IBM Quantum Safe)
As organizations begin transitioning to post-quantum cryptography over the next year, agility will be crucial to ensure their systems are prepared for continued transformation, particularly as the US National Institute of Standards and Technology (NIST) continues to expand its toolbox of post-quantum cryptography standards. NIST’s initial post-quantum cryptography standards were a signal to the world that the time is now to start the journey toward becoming quantum-safe. Just as important is the need for crypto-agility: ensuring that systems can rapidly adapt to new cryptographic mechanisms and algorithms in response to evolving threats, technological advances and vulnerabilities. Ideally, automation will streamline and accelerate the process.
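As a minimal sketch of what crypto-agility can mean in code, the Python example below hides the choice of signature algorithm behind a single registry, so swapping in a new scheme (for example, a post-quantum algorithm once a vetted implementation is adopted) means changing one mapping rather than every call site. The class and registry names are assumptions for illustration, and the Ed25519 backend via the third-party `cryptography` package stands in for whatever algorithms an organization actually uses; this is not an IBM or NIST recipe.

```python
# Minimal crypto-agility sketch: callers sign/verify through one interface,
# and the concrete algorithm is selected from a single configuration point.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


class Ed25519Signer:
    """One interchangeable backend; a future post-quantum backend would
    expose the same sign()/verify() surface."""

    def __init__(self):
        self._key = Ed25519PrivateKey.generate()

    def sign(self, message: bytes) -> bytes:
        return self._key.sign(message)

    def verify(self, message: bytes, signature: bytes) -> bool:
        try:
            self._key.public_key().verify(signature, message)
            return True
        except InvalidSignature:
            return False


# Swapping algorithms means editing this registry, not every call site.
SIGNER_REGISTRY = {"ed25519": Ed25519Signer}


def get_signer(algorithm: str = "ed25519"):
    return SIGNER_REGISTRY[algorithm]()


if __name__ == "__main__":
    signer = get_signer("ed25519")
    sig = signer.sign(b"inventory record")
    print("signature valid:", signer.verify(b"inventory record", sig))
```

The point of the indirection is the agility itself: an inventory of where each algorithm is used, plus one place to change it, is what lets automation accelerate a future migration.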
Data will become a vital part of AI security (Suja Viswesan, Vice President of Security Software Development, IBM)
Data security and AI security will become an essential ingredient of trustworthy AI. “Trustworthy AI” is often interpreted as AI that is transparent, fair and privacy-preserving. Those are critical traits. But if the AI, and the data that feeds it, are not also secure, then all of those other traits are compromised. In 2025, as businesses, governments and individuals interact with AI more often and with higher stakes, data and AI security will be viewed as an even more important part of the trustworthy AI recipe.
Organizations will keep learning from the juxtaposition of AI’s benefits and threats (Mark Hughes, Global Managing Partner, Cybersecurity Services, IBM)
As AI moves from proof of concept to large-scale deployment, enterprises are reaping the benefits of productivity and efficiency gains, including automating security and compliance tasks to protect their data and assets. But organizations need to be aware that AI is also being used as a new tool and conduit for threat actors to breach long-standing security processes and protocols. Organizations must adopt security frameworks, best-practice recommendations and guardrails for AI, and adapt quickly, to address both the benefits and risks that come with rapid AI advancements.
Better understanding of AI-assisted versus AI-powered threats (Troy Bettencourt, Global Partner and Head of IBM X-Force)
Protect against AI-assisted threats; plan for AI-powered threats. There is a distinction between AI-powered and AI-assisted threats, and it shapes how organizations should think about their proactive security posture. AI-powered attacks, such as deepfake video scams, have been limited to date; today’s threats remain primarily AI-assisted, meaning AI can help threat actors create malware or craft more convincing phishing emails. To address current AI-assisted threats, organizations should prioritize implementing end-to-end security for their own AI solutions, including protecting user interfaces, APIs, language models and machine learning operations (MLOps), while remaining mindful of strategies to defend against future AI-powered attacks.
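To make the “protecting user interfaces and APIs” point a little more concrete, here is a deliberately simple, hypothetical Python guardrail that sits in front of an LLM call: it rejects oversized inputs and refuses requests containing obvious prompt-injection phrases before anything reaches the model. The limits, blocked phrases and the `call_model` placeholder are assumptions for illustration, not a complete or recommended defense.

```python
# Hypothetical input guardrail in front of an LLM-backed API endpoint.
# Limits and blocked phrases are illustrative; real deployments would layer
# this with authentication, output filtering and monitoring (MLOps controls).
MAX_PROMPT_CHARS = 4000

# Naive signals of prompt-injection attempts (assumption, not exhaustive)
BLOCKED_PHRASES = (
    "ignore previous instructions",
    "reveal your system prompt",
)


def sanitize_prompt(user_input: str) -> str:
    """Raise ValueError for inputs the API should refuse outright."""
    if len(user_input) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds maximum allowed length")
    lowered = user_input.lower()
    for phrase in BLOCKED_PHRASES:
        if phrase in lowered:
            raise ValueError("prompt contains a blocked instruction pattern")
    return user_input.strip()


def handle_request(user_input: str, call_model) -> str:
    """Gate the (placeholder) model call behind the guardrail."""
    return call_model(sanitize_prompt(user_input))


if __name__ == "__main__":
    echo_model = lambda prompt: f"[model response to: {prompt!r}]"
    print(handle_request("Summarize our Q3 incident report.", echo_model))
```

A string filter like this is only the outermost layer; the paragraph’s broader point is that the model, its training data and the surrounding MLOps pipeline need the same end-to-end attention.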
There is one very clear message in these predictions: understanding how AI can both help and harm an organization is critical to ensuring that your business and its assets are protected in 2025 and beyond.