Artificial intelligence has quickly become the cornerstone of modern business, driving innovation and efficiency across all industries. Yet, as businesses increasingly rely on AI tools to manage sensitive tasks, they also expose themselves to new security vulnerabilities.
As companies integrate AI into their operations, AI entities become more autonomous and gain access to more sensitive data and systems. As a result, CISOs face new cybersecurity challenges: traditional security practices, designed for human users and conventional machines, are not enough when applied to AI. It is therefore essential that businesses address these emerging vulnerabilities if they want to avoid the security issues that come with uncontrolled AI integration and protect their most valuable data.
Offensive Research Evangelist, CyberArk Labs.
AI: much more than just machines
Each identity type has a different role and different capabilities. Humans generally know how best to protect their passwords: for example, most people understand that they should avoid reusing the same password or choosing one that is easy to guess. Machines, including servers and computers, often hold or manage passwords, but they are vulnerable to breaches and lack the ability to prevent unauthorized access.
AI entities, including chatbots, are difficult to classify with regard to cybersecurity. These non-human identities manage business-critical credentials, but differ significantly from traditional machine identities such as software, devices, virtual machines, APIs and robots. AI is thus neither a human identity nor a machine identity; it occupies a unique position. It combines human-guided learning with machine autonomy and requires access to other systems to function, yet it lacks the judgment to set boundaries and prevent the sharing of confidential information.
Increasing investments, lagging security
Businesses are investing heavily in AI, with 432,000 UK organizations – or 16% – reporting they have adopted at least one AI technology. AI adoption is no longer a trend; it’s a necessity, which is why spending on emerging technologies is only likely to continue to increase in the years to come. The UK AI market is currently worth over £16.8 billion and is expected to reach £801.6 billion by 2035.
However, rapid investment in AI often outpaces identity security measures. Businesses don't always understand the risks AI poses, so following security best practices or investing sufficient time in securing AI systems is not always at the top of the priority list, leaving these systems vulnerable to cyberattacks. Additionally, traditional security practices such as access controls and least-privilege rules are not easily applicable to AI systems. Another problem is that, with everything already on their plates, security practitioners struggle to find enough time to secure AI workloads.
CyberArk’s Identity Security Threat Report 2024 reveals that while 68% of UK organizations say up to half of their machine identities access sensitive data, only 35% include these identities in their definition of privileged users and apply the necessary identity security measures. This oversight is risky because AI systems, loaded with up-to-date training data, become high-value targets for attackers. A compromised AI system could lead to the disclosure of intellectual property, financial information and other sensitive data.
The threat of cloud attacks on AI systems
Security threats to AI systems are not unique, but their scope and scale could be. Constantly updated with new training data from inside a company, LLMs quickly become prime targets for attackers once deployed. Because they must be trained on real data rather than test data, this up-to-date information can reveal valuable company secrets, financial data and other confidential assets. AI systems inherently trust the data they receive, making them particularly susceptible to being tricked into disclosing protected information.
In particular, cloud attacks against AI systems enable lateral movement and jailbreaking, allowing attackers to exploit a system’s vulnerabilities and trick it into releasing misinformation to the public. Cloud identity and account compromises are common, with many high-profile breaches resulting from credential theft causing significant damage to major brands in the technology, banking and consumer industries.
AI can also be used to carry out more complex cyberattacks. For example, it allows bad actors to analyze each permission linked to a particular role within an organization and evaluate whether that permission can be used to gain access and move laterally through the organization.
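To make the idea concrete, the sketch below shows how role permissions could be enumerated and checked against a watch-list of permissions known to enable lateral movement. The role names and the contents of the watch-list are illustrative assumptions, not taken from any specific cloud platform or from the report cited above.

```python
# Hypothetical sketch: enumerate each role's permissions and flag those
# that could enable lateral movement. The permission names below are
# illustrative examples of "pivot" rights, not a definitive list.
RISKY_PERMISSIONS = {
    "sts:AssumeRole",  # lets an identity pivot into another role
    "iam:PassRole",    # lets a role hand privileges to another service
    "secrets:Read",    # exposes credentials usable elsewhere
}

def find_pivot_paths(roles: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per role, the permissions that could enable lateral movement."""
    return {
        role: perms & RISKY_PERMISSIONS
        for role, perms in roles.items()
        if perms & RISKY_PERMISSIONS
    }

# Example inventory: only the chatbot role holds a risky permission.
roles = {
    "ai-chatbot":   {"s3:GetObject", "secrets:Read"},
    "report-batch": {"s3:GetObject"},
}
print(find_pivot_paths(roles))  # {'ai-chatbot': {'secrets:Read'}}
```

The same enumeration is equally useful to defenders: running it over an identity inventory highlights which AI workloads hold pivot-capable permissions and should be reviewed first.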
So what is the reasonable next step? Companies are still at the beginning of integrating AI and LLMs, so it will take time to establish strong identity security practices. However, CISOs cannot afford to sit back; they must proactively develop strategies to protect AI identities before a cyberattack occurs or new regulation comes into effect requiring them to do so.
Key steps to strengthening AI security
While there is no silver bullet when it comes to AI security, there are steps businesses can put in place to mitigate risks. Specifically, there are some key steps CISOs can take to improve their AI identity security posture as the industry continues to evolve.
Identify overlap: CISOs should make it a priority to identify areas where existing identity security measures can be applied to AI. For example, leveraging existing controls such as access management and principles of least privilege where possible can help improve security.
Safeguarding the environment: It is essential that CISOs understand the environment in which AI operates to protect it as effectively as possible. While purchasing an AI security platform is not a necessity, it is vital to secure the environment in which AI activity takes place.
Build an AI security culture: It’s difficult to get all employees to adopt identity security best practices without a strong AI security mindset. Involving security experts in AI projects allows them to share their knowledge and expertise with all employees and ensures everyone is well aware of the risks of using AI. It is also important to consider how data is processed and how the LLM is trained, so that employees think about what using emerging technologies involves and exercise even greater care.
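The least-privilege control mentioned in the first step can be sketched as a deny-by-default check on an AI identity's granted scopes. The class, scope names and helper function here are illustrative assumptions, not a specific product's API.

```python
# Minimal sketch of least privilege applied to an AI identity:
# the agent is granted only the scopes its workload needs, and
# every access is checked against them (deny by default).
class AIIdentity:
    def __init__(self, name: str, scopes: frozenset):
        self.name = name
        self.scopes = scopes  # grant only what the workload needs

    def can(self, action: str) -> bool:
        # Anything not explicitly granted is denied.
        return action in self.scopes

def fetch_record(identity: AIIdentity, action: str, record_id: str) -> str:
    """Gate every data access on the identity's granted scopes."""
    if not identity.can(action):
        raise PermissionError(f"{identity.name} lacks scope '{action}'")
    return f"record:{record_id}"

# A support chatbot that only ever needs FAQ content:
chatbot = AIIdentity("support-chatbot", frozenset({"read:faq"}))
print(fetch_record(chatbot, "read:faq", "42"))        # allowed
# fetch_record(chatbot, "read:customer-pii", "42")    # raises PermissionError
```

The design choice worth noting is the deny-by-default posture: widening an AI identity's reach requires an explicit, auditable grant, which is exactly the property existing access-management controls are built to enforce.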
The use of AI in business presents both great opportunities and unprecedented security challenges. As we navigate this new landscape, it becomes clear that traditional security measures are insufficient in the face of the unique risks posed by AI systems. The role of CISOs is no longer simply to manage conventional cybersecurity threats; it’s now about recognizing the distinct nature of AI identities and securing them accordingly. Businesses must therefore ensure they invest time and resources to find the right balance between innovation and security to keep up with the latest trends while protecting their most valuable assets.
This article was produced as part of TechRadarPro’s Expert Insights channel, where we feature the best and brightest minds in today’s technology industry. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you would like to contribute, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro