April 10, 2025 – NVIDIA is collaborating with Google Cloud to bring agentic AI to enterprises seeking to locally harness the Google Gemini family of AI models, using the NVIDIA Blackwell HGX and DGX platforms and NVIDIA Confidential Computing for data safety.
With the NVIDIA Blackwell platform on Google Distributed Cloud, on-premises data centers can stay aligned with regulatory requirements and data sovereignty laws by locking down access to sensitive information such as patient records, financial transactions and classified government information. NVIDIA Confidential Computing also protects sensitive code in Gemini models from unauthorized access and data leaks.
“By bringing our Gemini models on premises with NVIDIA Blackwell’s breakthrough performance and confidential computing capabilities, we’re enabling enterprises to unlock the full potential of agentic AI,” said Sachin Gupta, vice president and general manager of infrastructure and solutions at Google Cloud. “This collaboration helps ensure customers can innovate securely without compromising on performance or operational ease.”
Confidential computing with NVIDIA Blackwell gives enterprises the technical assurance that their user prompts to the Gemini models’ application programming interface, as well as the data used for fine-tuning, stay secure and cannot be viewed or modified.
At the same time, model owners can protect their models against unauthorized access or tampering, providing dual-layer protection that lets enterprises innovate with Gemini models while maintaining data privacy.
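The shape of that dual-layer protection can be pictured with a small, purely illustrative Python sketch: the prompt owner checks a hardware attestation report before releasing any data, while the model owner’s weights stay inside the trusted execution environment. All types and helper names below are hypothetical and are not part of any published NVIDIA or Google Cloud API.

```python
# Purely illustrative sketch of the dual-layer idea: the prompt owner verifies
# attestation before sending data; the model stays inside the trusted
# execution environment (TEE). All names here are hypothetical.

from dataclasses import dataclass


@dataclass
class AttestationReport:
    """Hypothetical evidence produced inside the confidential VM / GPU TEE."""
    gpu_measurement: str   # hash of GPU firmware/state
    vm_measurement: str    # hash of the confidential VM image
    signature: bytes       # signed by a hardware root of trust


def verify_attestation(report: AttestationReport, trusted: dict) -> bool:
    """Check the report against measurements the client trusts.
    A real deployment would also verify the signature chain."""
    return (report.gpu_measurement == trusted["gpu"]
            and report.vm_measurement == trusted["vm"])


def send_prompt_if_trusted(report: AttestationReport, prompt: str,
                           trusted: dict) -> str:
    """Release user data only to an attested environment."""
    if not verify_attestation(report, trusted):
        raise RuntimeError("Environment failed attestation; prompt not sent.")
    # In a real system this would call the model endpoint running inside the TEE.
    return f"[sent to attested endpoint] {prompt}"


if __name__ == "__main__":
    trusted_measurements = {"gpu": "abc123", "vm": "def456"}
    report = AttestationReport("abc123", "def456", b"sig")
    print(send_prompt_if_trusted(report, "Summarize this patient record.",
                                 trusted_measurements))
```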
AI agents drive new enterprise applications
This new offering arrives as agentic AI is transforming enterprise technology, delivering more advanced problem-solving capabilities. Unlike AI models that perceive or generate based on learned knowledge, agentic AI systems can reason, adapt and make decisions in dynamic environments. In enterprise IT support, for example, a knowledge-based AI model can retrieve and present troubleshooting guides, but an agentic AI system can diagnose issues, execute fixes and escalate complex problems on its own.
Similarly, in finance, a traditional AI model could flag potentially fraudulent transactions based on patterns, but an agentic AI system could go further, investigating anomalies and taking proactive measures such as blocking transactions before they complete or adjusting fraud-detection rules in real time.
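To make the distinction concrete, here is a minimal, purely illustrative Python sketch; the tool functions, field names and threshold are hypothetical and not part of Gemini or any NVIDIA product. The traditional model only scores and flags, while the agentic workflow investigates the flag and then acts.

```python
# Illustrative contrast: a traditional classifier flags; an agentic workflow
# investigates and acts. All tools and thresholds below are hypothetical.

FRAUD_THRESHOLD = 0.8


def traditional_model(transaction: dict) -> bool:
    """A conventional model just scores the transaction and flags it."""
    return transaction["risk_score"] > FRAUD_THRESHOLD


def agentic_workflow(transaction: dict) -> str:
    """An agent reasons over the flag, gathers context and takes an action."""
    if not traditional_model(transaction):
        return "approve"

    # Step 1: investigate the anomaly with additional context (hypothetical tool).
    history = lookup_recent_activity(transaction["account_id"])
    unusual_location = transaction["country"] not in history["usual_countries"]

    # Step 2: decide and act, instead of only reporting.
    if unusual_location:
        block_transaction(transaction["id"])
        tighten_detection_rules(transaction["merchant"])
        return "blocked"
    return "escalate_to_analyst"


# --- hypothetical tool stubs so the sketch runs ---
def lookup_recent_activity(account_id: str) -> dict:
    return {"usual_countries": {"US"}}


def block_transaction(txn_id: str) -> None:
    print(f"blocking {txn_id}")


def tighten_detection_rules(merchant: str) -> None:
    print(f"updating detection rules for {merchant}")


if __name__ == "__main__":
    txn = {"id": "t-42", "account_id": "a-7", "risk_score": 0.93,
           "country": "BR", "merchant": "acme"}
    print(agentic_workflow(txn))  # -> "blocked"
```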
The on-premises dilemma
While many organizations can already use models with multimodal reasoning, integrating text, images, code and other data types to solve complex problems and build cloud-based agentic applications, those with strict security or data sovereignty requirements have so far been unable to do so.
With this announcement, Google Cloud becomes one of the first cloud service providers to offer confidential computing capabilities for securing agentic AI workloads across environments, whether cloud or hybrid.
Powered by the NVIDIA HGX B200 platform with Blackwell GPUs and NVIDIA Confidential Computing, this solution will let customers protect AI models and data, achieving breakthrough performance and energy efficiency without compromising data security or model integrity.
AI observability and security for agentic AI
Scaling agentic AI in production requires robust observability and security to ensure reliable performance and compliance.
Google Cloud announced a new GKE Inference Gateway built to optimize the deployment of AI inference workloads with advanced routing and scalability. It integrates with NVIDIA Triton Inference Server and NVIDIA NeMo Guardrails, offering intelligent load balancing that improves performance and reduces serving costs while enabling centralized model security and governance.
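The announcement doesn’t detail the routing algorithm, but “intelligent load balancing” for LLM inference generally means routing on live replica metrics rather than round-robin. The sketch below is a rough, purely conceptual Python illustration under that assumption; the replica metrics (queue depth, KV-cache utilization) and cost weights are hypothetical, and this is not the GKE Inference Gateway or Triton API.

```python
# Conceptual sketch: pick the inference replica with the most headroom, using
# live metrics such as queue depth and KV-cache utilization instead of
# round-robin. Metrics and weights are hypothetical.

from dataclasses import dataclass


@dataclass
class Replica:
    name: str
    queue_depth: int              # requests waiting on this replica
    kv_cache_utilization: float   # fraction of KV cache in use (0.0 - 1.0)


def routing_cost(r: Replica) -> float:
    """Lower is better: weigh pending work against memory pressure."""
    return r.queue_depth + 10.0 * r.kv_cache_utilization


def pick_replica(replicas: list[Replica]) -> Replica:
    """Route the next request to the cheapest replica by the cost above."""
    return min(replicas, key=routing_cost)


if __name__ == "__main__":
    pool = [
        Replica("triton-0", queue_depth=4, kv_cache_utilization=0.9),
        Replica("triton-1", queue_depth=7, kv_cache_utilization=0.2),
        Replica("triton-2", queue_depth=1, kv_cache_utilization=0.5),
    ]
    print(pick_replica(pool).name)  # -> "triton-2"
```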
Looking ahead, Google Cloud is working to enhance observability for agentic AI workloads by integrating NVIDIA Dynamo, an open-source library designed to serve and scale AI models across AI factories.
At Google Cloud Next, attend NVIDIA’s special address, explore sessions, see demos and talk to NVIDIA experts.
Source: Anne Hecht, NVIDIA