en.Wedoany.com reported: Meta and Amazon jointly announced a multi-year chip procurement agreement on April 24, under which Meta will deploy AWS Graviton processors at large scale. According to Amazon's official press release, the initial deployment covers tens of millions of Graviton cores, with flexible expansion planned as Meta's artificial intelligence capacity grows. Amazon Vice President and Distinguished Engineer Nafea Bshara publicly stated that the agreement runs for three to five years and makes Meta one of AWS's top five Graviton customers worldwide.
Santosh Janardhan, Meta's head of infrastructure, emphasized in the press release that as the infrastructure supporting Meta's AI vision continues to scale, diversifying compute resources has become a strategic necessity. He noted that expanding to Graviton lets Meta run the CPU-intensive workloads behind agentic AI with the required performance and efficiency. Meta has already signed chip cooperation agreements this year with Nvidia, AMD, and Arm Holdings. Earlier this week it also finalized GPU capacity leasing arrangements totaling $48 billion with CoreWeave and Nebius, and announced plans to cut approximately 8,000 jobs in May to offset the steadily rising cost of its AI infrastructure investments.
The agreement centers on Graviton5, Amazon's self-developed general-purpose CPU based on the Arm architecture. Manufactured on a 3-nanometer process, each chip contains 192 cores, carries five times the cache of its predecessor, and cuts inter-core communication latency by up to 33%. Amazon stated that Graviton is the most cost-effective option among comparable compute choices on the AWS EC2 platform, consuming about 60% less power than comparable x86 solutions while delivering up to 25% better performance. The chip runs on the AWS Nitro System and supports the Elastic Fabric Adapter, enabling low-latency, high-bandwidth communication between instances, which is crucial for distributing large-scale agentic workloads across many servers operating in concert.
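As a rough illustration of what those efficiency figures imply, the performance-per-watt advantage can be estimated from the numbers quoted above. This is a back-of-envelope sketch only, assuming the 60% power reduction and 25% performance gain apply to the same workload, which the press release does not confirm:

```python
# Back-of-envelope estimate of Graviton's performance-per-watt advantage,
# using the figures quoted in the article (assumed to describe one workload).
def perf_per_watt_gain(power_reduction: float, perf_gain: float) -> float:
    """Relative performance per watt vs. the x86 baseline (baseline = 1.0)."""
    relative_power = 1.0 - power_reduction  # 60% less power -> 0.40x
    relative_perf = 1.0 + perf_gain         # 25% faster    -> 1.25x
    return relative_perf / relative_power

gain = perf_per_watt_gain(power_reduction=0.60, perf_gain=0.25)
print(f"~{gain:.2f}x performance per watt vs. comparable x86")  # ~3.12x
```

Under those assumptions the claimed deltas compound to roughly a 3x performance-per-watt advantage, which helps explain why a CPU-heavy buyer would commit at this scale.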
This collaboration marks a structural shift in a GPU-dominated AI chip landscape. The rise of agentic AI has generated a surge in CPU-intensive workloads spanning real-time inference, code generation, search, and multi-step task orchestration across domains. Graviton5 is designed for precisely these scenarios, efficiently coordinating complex agent workflows at the scale of billions of interactions. Intel CEO Lip-Bu Tan echoed the trend on a conference call on Thursday, noting that despite the company's ongoing capacity expansion, demand across all business segments still exceeds supply, particularly for Xeon server CPUs. CPUs, he added, are re-emerging as indispensable infrastructure in the AI era.
The agreement further strengthens the Graviton chip's industry position. Amazon CEO Andy Jassy disclosed in his annual shareholder letter that Amazon's self-developed chip business, spanning Graviton, Trainium, and Nitro, has already reached annualized revenue of more than $20 billion, with plans to sell chip racks directly to third parties in the future. Against the backdrop of persistently surging AI demand, the strategic value of CPUs in data center infrastructure is being reassessed, and Meta's large-scale Graviton deployment stands as a landmark example of the CPU's comprehensive revival in the AI inference era.
This article is compiled by Wedoany. All AI citations must indicate the source as "Wedoany". If there is any infringement or other issues, please notify us promptly, and we will modify or delete it accordingly. Email: news@wedoany.com