LAS VEGAS – Wiz researchers have warned that AI infrastructure providers such as Hugging Face and Replicate are susceptible to novel attacks and must improve their defenses to protect sensitive customer data.
On Wednesday at Black Hat USA 2024, Wiz security researchers Hillai Ben-Sasson and Sagi Tzadik presented a session that built on a year of research into three of the top AI infrastructure providers: Hugging Face, Replicate and SAP AI Core. The researchers tested whether they could break into the major AI platforms and studied how easily attackers could access confidential data.
The goal of the research was to assess the security of these platforms and determine the potential risks of storing valuable data on any of the three major AI platforms. As the new AI technology has taken off, cybercriminals and nation-state actors have targeted third-party vendors and platforms that house sensitive data and training models.
Hugging Face, a machine learning platform where users create models and store data sets, suffered a recent attack. In June, the platform detected suspicious activity on its Spaces platform, which required a reset of keys and tokens.
During the session, the researchers showed how they compromised the platforms by uploading malicious models and using container escape techniques to break out of their tenant and move laterally through the service. In an April blog post, Wiz researchers described how they were able to compromise Hugging Face and gain cross-tenant access to other customers' data and training models. The cloud security vendor later published research on similar issues with Replicate and SAP AI Core, and the researchers demonstrated the attack techniques during Wednesday's session.
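The session description keeps the initial foothold at a high level, but one common path to code execution on ML platforms is Python's pickle serialization, which several model formats build on. A minimal, hypothetical sketch of why loading an untrusted pickle-based model is dangerous (the command is illustrative, not the researchers' actual payload):

```python
import os
import pickle

class MaliciousModel:
    # pickle calls __reduce__ to learn how to rebuild this object;
    # returning (os.system, (cmd,)) makes deserialization run the command.
    def __reduce__(self):
        return (os.system, ("id  # attacker-controlled command runs at load time",))

payload = pickle.dumps(MaliciousModel())

# Any service that deserializes an untrusted model file with pickle.loads()
# (or a loader built on it) executes the embedded command inside its container.
pickle.loads(payload)
```

From that foothold inside the inference container, an attacker would still need an escape or misconfiguration to reach other tenants, which is where the isolation weaknesses discussed below come in.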
Before Black Hat, Ben-Sasson, Tzadik and Ami Luttwak, CTO and co-founder of Wiz, spoke with TechTarget Editorial about the session and the lessons learned from the research. In all three cases, the researchers were able to breach Hugging Face, Replicate and SAP AI Core and access sensitive customer data.
“We accessed millions of confidential AI artifacts such as models, data sets and code – unique intellectual property that can go for millions of dollars,” Ben-Sasson said.
Luttwak said many AI service providers use containers as barriers between different customers. But he noted that these containers can be bypassed in many ways. For example, Luttwak said container services are prone to misconfigurations.
“There are all kinds of escape vulnerabilities that will allow people to get around these barriers. We believe that containerization is not a strong enough barrier between tenants,” Luttwak said.
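The article does not name the specific misconfigurations, but a tenant can probe for common isolation weaknesses from inside its own container. A minimal sketch of two such checks (illustrative examples, not Wiz's methodology):

```python
import os

def check_isolation_weaknesses() -> list[str]:
    """Look for common signs that a container's isolation is weaker than intended."""
    findings = []
    # An exposed Docker socket lets a tenant drive the host's container runtime.
    if os.path.exists("/var/run/docker.sock"):
        findings.append("Docker socket mounted inside the container")
    # CAP_SYS_ADMIN is broad enough to enable several known escape techniques.
    try:
        with open("/proc/self/status") as f:
            for line in f:
                if line.startswith("CapEff:"):
                    cap_eff = int(line.split()[1], 16)
                    if cap_eff & (1 << 21):  # CAP_SYS_ADMIN is capability bit 21
                        findings.append("CAP_SYS_ADMIN granted to this container")
    except OSError:
        pass  # /proc is unavailable outside Linux
    return findings

if __name__ == "__main__":
    for finding in check_isolation_weaknesses() or ["no common weaknesses detected"]:
        print(finding)
```

Either finding alone can turn a single compromised workload into a host-level compromise, which is why Luttwak argues containers are not a sufficient tenant boundary on their own.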
Once the researchers found they could breach the platforms, they reported the issues to each service provider. Ben-Sasson applauded Hugging Face, Replicate and SAP for their disclosure responses. He said they were collaborative and professional, and Wiz researchers worked closely with their respective security teams to resolve the problems.
While the providers addressed the vulnerabilities and weaknesses, Wiz researchers recommended that organizations adapt their threat models accordingly to account for potential data compromises. As for the platforms, the researchers urged AI service providers to improve their isolation and sandboxing standards to prevent threat actors from reaching other tenants and moving laterally through the platforms.
Risks of rapid AI adoption
In addition to the three platforms requiring stronger defenses, such as improved sandboxing and isolation standards, the researchers also discussed broader problems associated with the rapid adoption of AI. They stressed that security is an afterthought when it comes to AI.
“AI security is also infrastructure security, because AI is very trendy and very new, and few people understand what AI security really is,” Luttwak said.
Luttwak added that organizations testing AI models right now often don't do things correctly because security teams don't understand all the parts of the infrastructure. That includes dozens, even hundreds, of unfamiliar tools, which creates more security problems. It's a huge challenge for security teams because everyone wants to use AI tools, he said. Consequently, teams use whatever resources are available, including open source tools, while security becomes a secondary concern.
“These tools are not built with security [in mind], and that means it puts every company in danger,” Luttwak said. “It's simply a matter of making sure that when you use models [and] when you use open source tools that are related to AI, [you] perform reasonable due diligence security validation on them. If we can prove it on AI service provider companies, where it's their main business, you can imagine a company that is not even that big.”
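One concrete form that due-diligence validation can take is statically scanning model files before loading them. A minimal sketch, assuming pickle-serialized models (the deny-list and the stack heuristic are illustrative; purpose-built scanners such as picklescan are far more thorough):

```python
import pickletools

# Illustrative deny-list of imports that enable code execution on load.
SUSPICIOUS = {("os", "system"), ("posix", "system"),
              ("subprocess", "Popen"), ("builtins", "eval"), ("builtins", "exec")}

def scan_model_pickle(path: str) -> list[tuple[str, str]]:
    """Return suspicious (module, name) imports found in a pickle file."""
    with open(path, "rb") as f:
        data = f.read()
    hits, recent_strings = [], []
    # genops decodes opcodes without executing the pickle.
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            recent_strings.append(str(arg))
        if opcode.name == "GLOBAL":  # older protocols: "module name" in one arg
            module, _, name = str(arg).partition(" ")
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
            # Crude stack approximation; real scanners model the stack precisely.
            module, name = recent_strings[-2], recent_strings[-1]
        else:
            continue
        if (module, name) in SUSPICIOUS:
            hits.append((module, name))
    return hits

# Usage: scan_model_pickle("model.bin") -> e.g. [("os", "system")]
```

A scan like this is only one layer; it flags known-bad imports but cannot prove a model safe, which is why the researchers also push providers toward stronger runtime isolation.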
During another Black Hat session on Wednesday, Chris Wysopal, CTO and co-founder at Veracode, discussed how developers are increasingly using large language models but often put security second. He listed several concerns, including generative AI coding tools reproducing existing vulnerabilities found in their training data sets.
Arielle Waldman is a Boston-based reporter covering enterprise security news.