en.Wedoany.com reported - At the Google Cloud Next 2026 conference on April 22, NVIDIA and Google Cloud announced a deepening of their decade-long strategic partnership, fully integrating NVIDIA's accelerated computing stack with Google's AI supercomputing infrastructure. Google Cloud formally expanded its AI supercomputing architecture with computing instances powered by the NVIDIA Grace Blackwell system, and announced the upcoming A5X instance, built on the NVIDIA Vera Rubin platform, giving customers a unified path from AI experimentation to large-scale deployment.
Google also released a data center network architecture called Virgo Networking, designed specifically for hyperscale AI workloads. Serving as the backbone of Google's AI supercomputing infrastructure, Virgo will enable the Vera Rubin A5X instance to scale to 960,000 GPUs across sites. Google has already deployed more than one million NVIDIA GPUs across its global fleet, spanning current mainstream models such as the H100, H200, B200, GB200, GB300, L4, A100, and RTX PRO 6000, to support both internal products and Google Cloud customer services.
The NVIDIA Omniverse libraries and the open-source Isaac Sim robot simulation framework are now available on the Google Cloud Marketplace. Developers can build physically accurate digital twins directly in the Google Cloud environment, develop custom robot simulation pipelines, and complete training, simulation, and validation before real-world deployment. NVIDIA NIM microservices have also been deployed to the Google Enterprise Agent Platform and Google Kubernetes Engine, supporting models such as Cosmos Reason 2 and enabling customers to move beyond chatbot applications toward agents capable of autonomous planning, execution, and interaction with the physical world.
The collaboration spans three deployment environments: cloud, on-premises, and edge. In the cloud, the Google Enterprise Agent Platform, GKE, and DGX Cloud all integrate NVIDIA GPUs; on-premises and at the edge, Google Distributed Cloud, built on NVIDIA Blackwell, extends the unified platform into customer-owned data centers. NVIDIA software libraries and frameworks such as CUDA, cuDNN, Dynamo, NeMo, and Nemotron are integrated with Google Cloud services and reference architectures, and Vertex AI, GKE, and Cloud Run all provide native auto-scaling and observability support for NVIDIA GPUs.
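To illustrate what GPU support in GKE looks like from the user's side, the sketch below builds a minimal Kubernetes pod manifest that requests an NVIDIA GPU through the standard `nvidia.com/gpu` extended resource. This is a hypothetical example, not from the announcement: the pod name, container image, and accelerator label value are illustrative assumptions.

```python
import json

# Minimal, hypothetical pod spec requesting one NVIDIA GPU on GKE.
# GKE exposes GPUs via the standard "nvidia.com/gpu" extended resource,
# and GPU node pools are selected with the "cloud.google.com/gke-accelerator"
# node label; the image name and accelerator value here are placeholders.
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "gpu-inference-demo"},
    "spec": {
        "containers": [
            {
                "name": "inference",
                "image": "us-docker.pkg.dev/example/inference:latest",  # placeholder
                "resources": {
                    "limits": {"nvidia.com/gpu": 1}  # request a single GPU
                },
            }
        ],
        "nodeSelector": {"cloud.google.com/gke-accelerator": "nvidia-h100-80gb"},
    },
}

manifest = json.dumps(pod_spec, indent=2)
print(manifest)
```

The scheduler then places the pod only on nodes that advertise a free GPU, which is the mechanism the native auto-scaling described above builds on.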
Through this partnership, Google Cloud solidifies its market position as a neutral acceleration platform: customers can invoke Google's Gemini models or NVIDIA's open Nemotron models on Vertex AI, both optimized for NVIDIA hardware. For customers, this means no longer piecing together GPUs, schedulers, and frameworks by hand; the co-engineered, integrated stack arrives close to turnkey. By showcasing on Google Cloud's multi-tenant platform, NVIDIA reinforces its industry position as the default AI hardware choice, while gaining deep feedback on enterprise workloads through tight integration with Vertex AI, GKE, and Cloud Run.
This article is compiled by Wedoany. All AI citations must indicate the source as "Wedoany". If there is any infringement or other issues, please notify us promptly, and we will modify or delete it accordingly. Email: news@wedoany.com