The pace of AI development continues to accelerate, but many organizations fail to apply basic security measures to their models and tools, according to new research from Orca Security.
On Wednesday, the cloud security vendor published its “2024 State of AI Security Report,” which detailed alarming risks and security gaps in AI models and tools. Orca researchers compiled the report by analyzing cloud asset data from AWS, Azure, Google Cloud, Oracle Cloud and Alibaba Cloud.
The report revealed that while AI use has grown among organizations, many are not deploying the tools securely. For example, Orca warned that organizations struggle to disable risky default settings that could allow attackers to gain root access, deploy packages containing vulnerabilities that threat actors could exploit, and unknowingly expose sensitive code.
It is the latest report to highlight the security risks that accompany rapid AI adoption. Last month, Veracode also warned that developers put security second when it comes to using AI to write code. Now, Orca has highlighted how the problems continue to grow within enterprises.
While 56% of organizations deploy their own AI models for collaboration and automation, a large number of the software packages they use contain at least one CVE.
“Most vulnerabilities are low risk – for now. [Sixty-two percent] of organizations have deployed an AI package with at least one CVE. Most of these vulnerabilities are medium risk, with an average CVSS score of 6.9, and only 0.2% of the vulnerabilities have a public exploit (compared to the 2.5% average),” Orca wrote in the report.
Insecure configurations and controls
Orca noted that Azure OpenAI was the AI service most frequently used by organizations to build custom applications, but there are concerns. The report said 27% of organizations did not configure their Azure OpenAI accounts with private endpoints, which could allow attackers to “access, intercept or manipulate the data transmitted between cloud resources and AI services.”
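For teams that want to find that gap in their own environments, a minimal sketch along the following lines, using the azure-identity and azure-mgmt-cognitiveservices Python packages, could flag Azure OpenAI accounts that still accept traffic over the public network instead of relying on private endpoints. The subscription ID is a placeholder, and the exact property names may vary by SDK version, so treat this as an illustration rather than a drop-in tool.

# Sketch: flag Azure OpenAI (Cognitive Services) accounts that still allow
# public network access instead of being reachable only via private endpoints.
# Assumes azure-identity and azure-mgmt-cognitiveservices are installed and
# that the caller supplies a real subscription ID.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient

def find_publicly_reachable_openai_accounts(subscription_id):
    client = CognitiveServicesManagementClient(DefaultAzureCredential(), subscription_id)
    flagged = []
    for account in client.accounts.list():
        # Azure OpenAI accounts are Cognitive Services accounts of kind "OpenAI".
        if account.kind != "OpenAI":
            continue
        props = account.properties
        # "Enabled" public network access means traffic is not restricted to
        # private endpoints; treat that as a finding to review.
        if props is not None and props.public_network_access == "Enabled":
            flagged.append(account.name)
    return flagged

if __name__ == "__main__":
    for name in find_publicly_reachable_openai_accounts("<subscription-id>"):
        print("Review network access for Azure OpenAI account:", name)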
The report highlighted a significant problem with the default settings for Amazon SageMaker, a machine learning service organizations use to build and deploy AI models in the cloud. Disabling risky default settings in general is a major problem organizations face when it comes to taking advantage of AI tools and platforms in business environments.
The default settings of AI services tend to favor development speed over security, which results in most organizations using insecure default settings.
Orca Security, “2024 State of AI Security Report”
“The default settings of AI services tend to favor development speed over security, which results in most organizations using insecure default settings. For example, 45% of Amazon SageMaker buckets use non-randomized default bucket names, and 98% of organizations have not disabled the default root access for Amazon SageMaker notebook instances,” according to the report.
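As an illustration of the two defaults called out above, here is a minimal boto3 sketch, not taken from the report itself, showing a SageMaker notebook instance created with root access disabled and an artifact bucket given a randomized name. The instance name, role ARN and region are placeholders, and the calls assume appropriate IAM permissions.

# Sketch: harden two SageMaker-related defaults flagged by the report.
import uuid
import boto3

sagemaker = boto3.client("sagemaker")
s3 = boto3.client("s3")

# 1. Create a notebook instance with root access turned off; the service
#    default is "Enabled", which the report says 98% of organizations keep.
sagemaker.create_notebook_instance(
    NotebookInstanceName="example-notebook",  # placeholder name
    InstanceType="ml.t3.medium",
    RoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder
    RootAccess="Disabled",
)

# 2. Use a randomized bucket name instead of the predictable default pattern
#    (sagemaker-<region>-<account-id>), which is easier for attackers to guess.
bucket_name = "ml-artifacts-" + uuid.uuid4().hex[:12]
s3.create_bucket(
    Bucket=bucket_name,
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)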
Orca warned that an attacker could use root access to gain privileged access and take action on the asset. Another problem with Amazon SageMaker, one that extends to all the cloud providers included in the report, is that organizations are not using self-managed encryption keys.
Another issue flagged in the report involved a lack of encryption protections. For example, 98% of organizations using Google Vertex AI had not enabled encryption at rest with their self-managed keys. While the report noted that some organizations might have encrypted their data by other means, it warned that the risks were significant. “This leaves the sensitive data exposed to attackers, increasing the chances that a bad actor can exfiltrate, delete or alter the AI model,” Orca wrote.
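For Vertex AI specifically, a customer-managed Cloud KMS key can be set as the default for new resources through the google-cloud-aiplatform SDK. The sketch below uses placeholder project, region and key names, and assumes the key already exists and that Vertex AI has permission to use it; it is one way to address the gap the report describes, not the only one.

# Sketch: make a customer-managed encryption key (CMEK) the default for
# Vertex AI resources created through this SDK session.
from google.cloud import aiplatform

# Placeholder path to an existing customer-managed key in Cloud KMS.
CMEK_KEY = (
    "projects/example-project/locations/us-central1/"
    "keyRings/example-ring/cryptoKeys/example-key"
)

# Resources created through the SDK after this call (datasets, models,
# endpoints) are encrypted at rest with the customer-managed key rather
# than Google-managed keys.
aiplatform.init(
    project="example-project",
    location="us-central1",
    encryption_spec_key_name=CMEK_KEY,
)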
The report also highlighted security risks associated with AI platforms such as OpenAI and Hugging Face. For example, Orca noted that 20% of organizations using OpenAI have an exposed access key, and 35% of organizations using Hugging Face have an exposed access key.
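Exposed keys like these often leak through source files and configuration committed to repositories. A generic sketch such as the following, built only on the Python standard library, can catch strings that match the commonly published OpenAI (“sk-”) and Hugging Face (“hf_”) token prefixes before code ships; the patterns are illustrative and will need tuning for real key formats.

# Sketch: scan a directory tree for strings that look like OpenAI or
# Hugging Face access tokens. Patterns are illustrative, not exhaustive.
import re
from pathlib import Path

PATTERNS = {
    "OpenAI key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "Hugging Face token": re.compile(r"hf_[A-Za-z0-9]{20,}"),
}

def scan(root="."):
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                print(f"Possible {label} found in {path}")

if __name__ == "__main__":
    scan()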
Wiz researchers also demonstrated how vulnerable Hugging Face can be in research presented at Black Hat USA 2024 last month. The researchers showed how they were able to compromise the AI platform and gain access to sensitive data.
Vulnerabilities are just one of the problems Orca Security highlighted in its new AI security risk report.
Check the default settings
Orca co-founder and CEO Gil Geron spoke with TechTarget Editorial about the problems tied to rapid AI adoption and the lack of security. “The roles and responsibilities related to using these kinds of technologies are not set in stone or clear. That’s why we’re seeing an increase in the use of these tools, but the risks are increasing in terms of access, obtaining data and vulnerabilities,” he said.
Geron added that it is important for security practitioners to recognize the risks, define policies and implement boundaries in order to keep up with the rapidly growing rate of AI adoption. He stressed that the security problem requires participation from both the engineering and security practitioner sides of an organization.
Geron also said the security challenges are not entirely new, even though the tools and platforms are. Every technology starts out very open until the risks are mapped, he said. Currently, the default settings are very permissive, which makes the tools and platforms easy to use, but that openness also creates security problems.
For now, he said, it is hard to say whether the root cause is organizations putting security second to deployment, or technology companies needing to do more to protect the tools, models and data sets.
“The fact that there is no defined line between your responsibility in using the technology and the vendor’s responsibility drives this notion of, ‘Oh, it’s probably secure because it’s provided by Google,’” Geron said. “But they cannot control how you use it, and they cannot control whether you train your models on internal data that you should not have exposed. They give you the technology, but how you use it is still your responsibility.”
It is also hard to know whether vendors changing the default settings would even help. Geron said the use of AI is still experimental and that providers are generally waiting for market feedback. “It makes it difficult to reset or change something when you don’t know how it will be used,” he said.
Geron urged organizations to check the default settings to ensure that projects and tools are secure, and he recommended limiting permissions and access.
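Checking the defaults can be automated. As one example, a boto3 sketch like the one below, assuming read-only SageMaker permissions, walks existing notebook instances and flags any still running with root access or direct internet access enabled; it is a starting point for the kind of review Geron describes, not a complete audit.

# Sketch: report SageMaker notebook instances that still use permissive defaults.
import boto3

sagemaker = boto3.client("sagemaker")

paginator = sagemaker.get_paginator("list_notebook_instances")
for page in paginator.paginate():
    for instance in page["NotebookInstances"]:
        name = instance["NotebookInstanceName"]
        detail = sagemaker.describe_notebook_instance(NotebookInstanceName=name)
        findings = []
        if detail.get("RootAccess") == "Enabled":
            findings.append("root access enabled")
        if detail.get("DirectInternetAccess") == "Enabled":
            findings.append("direct internet access enabled")
        if findings:
            print(name + ": " + ", ".join(findings))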
“And last but not least is pure hygiene of your network, such as isolation and separation, which are all good security practices but are even more important with these types of services,” he said.
Arielle Waldman is a news writer for TechTarget Editorial covering enterprise security.