The growing influence of artificial intelligence (AI) has many organizations scrambling to address the new cybersecurity and data privacy concerns created by the technology, especially as AI is used in cloud systems. Apple is addressing AI's security and privacy questions with its Private Cloud Compute (PCC) system.
Apple appears to have solved the problem of offering cloud services without undermining user privacy or adding additional layers of insecurity. It had to: Apple needed a cloud infrastructure on which to run generative AI (genAI) models that require more processing power than its devices could provide, while still protecting user privacy, a Computerworld article noted.
Apple is opening the PCC system to security researchers to "learn more about PCC and perform their own independent verification of our claims," the company announced. It is also extending its Apple Security Bounty program.
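Central to those claims is verifiable transparency: a device is only supposed to send a request to a PCC node whose software measurement has been published for public inspection, which is what gives researchers something concrete to check. The Swift sketch below illustrates that general pattern only; every type, name and log format here is a hypothetical stand-in, not Apple's actual attestation protocol or API.

```swift
import Foundation
import CryptoKit

// Hypothetical sketch of the kind of check a PCC-style client could perform
// before trusting a cloud node. All names are invented for illustration;
// Apple's real attestation protocol and APIs differ.

struct NodeAttestation {
    let softwareMeasurement: Data                 // hash of the node's software image
    let signature: Data                           // produced by the node's hardware root of trust
    let publicKey: Curve25519.Signing.PublicKey   // the node's attested signing key
}

/// Trust a node only if (1) its software measurement appears in the public
/// transparency log that researchers can audit, and (2) the measurement is
/// genuinely signed by the node's attested key.
func shouldTrustNode(_ attestation: NodeAttestation,
                     publishedMeasurements: Set<Data>) -> Bool {
    guard publishedMeasurements.contains(attestation.softwareMeasurement) else {
        return false  // node claims software no one has been able to inspect
    }
    return attestation.publicKey.isValidSignature(attestation.signature,
                                                  for: attestation.softwareMeasurement)
}
```

The point of a design like this is that the trust decision hinges on a public log rather than on the provider's word, which is exactly the property researchers are being invited to probe.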
What does this mean for the future of AI security? Security Intelligence spoke with Ruben Boonen, who works in CNE capability development at IBM, to find out what security researchers think of PCC and Apple's approach.
SI: Computerworld reported on this story, saying Apple hopes that "the energy of the entire infosec community will combine to help build a moat to protect the future of AI." What do you think of this move?
Boonen: I read the Computerworld article and looked at Apple's own statements about their private cloud. I think what Apple has done here is good. I think it goes beyond what other cloud providers do, because Apple provides insight into some of the internal components they use and essentially says to the security community, you can take a look at this and see whether it's secure or not.
It's good from the perspective of AI constantly developing as an industry. Bringing generative AI components into everyday consumer devices, and getting people to trust AI services with their data, is a very good step.
SI: What do you see as the pros of Apple's approach to securing AI in the cloud?
Boonen: Other cloud providers offer strong security guarantees for data stored in their clouds. Many companies, including IBM, trust these cloud providers with their business data. But often the processes used to secure that data are not visible to customers; they don't explain exactly what they're doing. The biggest difference here is that Apple provides this transparent environment where users can test those claims.
SI: What are the drawbacks?
Boonen: Currently, the most capable AI models are very large, which makes them very useful. But when we want AI on consumer devices, providers tend to ship smaller models that can't answer every question, so these fall back on larger models in the cloud. That carries additional risk. But I think this cloud model for AI is inevitable for the whole industry. Apple is implementing it now because they want consumers to trust the AI process.
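The fallback Boonen describes — a small on-device model that escalates to a larger cloud model only when it must — can be sketched roughly as follows. This is a minimal illustration of the routing idea under assumed types; `LanguageModel`, `HybridAssistant` and the confidence threshold are hypothetical, not Apple's implementation.

```swift
import Foundation

// Minimal sketch of on-device-first routing. LocalModel and CloudModel
// clients are assumed; these protocols and names are illustrative only.

protocol LanguageModel {
    /// Returns a response and the model's confidence in it (0...1).
    func respond(to prompt: String) async throws -> (text: String, confidence: Double)
}

struct HybridAssistant {
    let onDevice: LanguageModel   // small model; the prompt never leaves the device
    let cloud: LanguageModel      // large model behind an attested cloud node
    let confidenceThreshold = 0.8

    func answer(_ prompt: String) async throws -> String {
        // Try the small local model first; most simple queries stop here.
        let local = try await onDevice.respond(to: prompt)
        if local.confidence >= confidenceThreshold {
            return local.text
        }
        // Only harder queries incur the extra risk of leaving the device,
        // which is why the cloud side needs verifiable privacy guarantees.
        return try await cloud.respond(to: prompt).text
    }
}
```

The design point the sketch makes is that privacy exposure scales with how often the local model gives up, so the residual risk concentrates in the cloud path — the part Apple is asking researchers to scrutinize.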
SI: Apple's system doesn't play well with other systems and products. How will Apple's efforts to secure AI in the cloud benefit other systems?
Boonen: They provide a design model that other providers like Microsoft, Google and Amazon can then replicate. I think it is mostly effective as an example that prompts other providers to say, maybe we should implement something similar and offer similar testing capabilities to our customers. So I don't think it has a direct impact on other providers, except to push them to be more transparent about their processes.
It's also important to mention the Apple bug bounty that goes along with inviting researchers to look at their system. Apple has a history of not engaging very well with the security community, and there have been cases in the past where they refused to pay bounties for issues found by security researchers. So I'm not sure they are doing this entirely to attract researchers; it's also partly to convince their customers that they are doing things securely.
That said, having read their design documentation, which is extensive, I think they are doing a fairly good job of addressing security around AI in the cloud.