Wedoany.com reported on March 14th that NVIDIA and Palantir Technologies have announced a collaboration to jointly advance sovereign AI technology. Unveiled ahead of NVIDIA's upcoming GTC event, the partnership aims to deliver AI solutions for corporate and government clients with data-sovereignty requirements.
Palantir's AI Operating System Reference Architecture (AIOS-RA) is built on NVIDIA's enterprise reference architecture and targets customers with latency-sensitive workflows, existing GPU estates, and broad geographic distribution. The platform runs on eight NVIDIA Blackwell Ultra GPUs and uses NVIDIA Spectrum-X Ethernet networking to support AI inference and training. The technology stack also includes NVIDIA AI Enterprise, CUDA-X libraries, Nemotron open models, and Magnum IO.
On Palantir's side, AIOS-RA provides a unified management plane that integrates the Rubix zero-trust Kubernetes platform, the Apollo autonomous deployment and lifecycle-management service, and the AI-centric AIP enterprise suite. The compute layer also draws on Foundry services for functions such as cataloging and building.
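As a rough illustration only, the components named above can be summarized as a plain data structure. The layer names and grouping below are editorial assumptions made for readability, not an official AIOS-RA schema:

```python
# Illustrative inventory of the AIOS-RA stack components named in the article.
# The layer keys ("nvidia_hardware", etc.) are assumptions, not official terms.
AIOS_RA_STACK = {
    "nvidia_hardware": [
        "Blackwell Ultra GPUs",
        "Spectrum-X Ethernet",
    ],
    "nvidia_software": [
        "NVIDIA AI Enterprise",
        "CUDA-X libraries",
        "Nemotron open models",
        "Magnum IO",
    ],
    "palantir": [
        "Rubix (zero-trust Kubernetes platform)",
        "Apollo (autonomous deployment and lifecycle management)",
        "AIP (enterprise AI suite)",
        "Foundry (cataloging and building services)",
    ],
}

def components(layer: str) -> list[str]:
    """Return the components listed for one layer of this sketch."""
    return AIOS_RA_STACK.get(layer, [])
```

Framing the stack this way simply makes the division of responsibility explicit: NVIDIA supplies the hardware and AI software layers, while Palantir supplies the management and application layers on top.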
The Palantir collaboration further expands NVIDIA's sovereign AI footprint; the company has previously reached similar agreements with Orange Business, India's NxtGen, and the UK government. Palantir, for its part, has signed an agreement with Accenture to help UK-headquartered infrastructure provider Sovereign AI deliver sovereign foundations for Europe's commercial and government sectors.
In a recent interview, Surya Mukherjee, Head of European Technology Research at Accenture, defined AI sovereignty as a concept spanning the entire technology stack. He noted: "Where is the AI making decisions, what data is it using, and what does it produce? So that's one level. Then consider the security of the model itself at a granular level: which country produces it, where is it produced?"
Mukherjee cited Stanford AI Index data showing that 70% of leading large language models (LLMs) are made in the US and 25% in China. "That means 95% of global models are not produced in Europe," he said. "For nations and companies alike, that requires careful consideration."
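Mukherjee's arithmetic can be checked directly. The two input shares are the figures quoted from the Stanford AI Index; the rest follows from them:

```python
# Shares of leading LLMs by producing country, as quoted in the article
# (Stanford AI Index figures cited by Mukherjee).
us_share = 0.70
china_share = 0.25

# 70% + 25% = 95% of leading models produced outside Europe,
# leaving at most 5% for Europe and all other regions combined.
non_european = us_share + china_share
remainder = 1.0 - non_european

print(f"{non_european:.0%} outside Europe; at most {remainder:.0%} remaining")
```

Note the implicit assumption in the quote: none of the US- or China-produced models count as European, so the 5% remainder is an upper bound on Europe's share, shared with every other region.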
Justin Boitano, Vice President of Enterprise AI Platform at NVIDIA, commented: "AI is redefining the infrastructure stack – demanding, latency-sensitive, and data-sovereign environments require a full-stack architecture – from chips to systems to software. By combining Palantir's sovereign AI Operating System Reference Architecture with NVIDIA AI infrastructure, industries and nations can quickly, efficiently, and trustworthily transform data into intelligence."
Akshay Krishnaswamy, Chief Architect at Palantir, added: "From our first deployment with the US government to every deployment since, our software has had to meet requirements in the most complex and sensitive environments, and the customer must maintain control. Partnering with NVIDIA – and building on the existing investments of many customers – we are proud to deliver a fully integrated AI operating system, optimized for NVIDIA accelerated computing infrastructure, enabling customers to realize the promise of on-premises, edge, and sovereign cloud deployments."