en.Wedoany.com Reported - U.S. artificial intelligence company Anthropic announced in San Francisco on May 6, 2026, local time, that it has signed a large-scale computing power agreement with SpaceX, the U.S. space exploration technology company owned by Elon Musk. Under the deal, Anthropic will reserve the full computing capacity of SpaceX's AI supercomputer "Colossus 1," located in Memphis, Tennessee. Earlier that morning, Musk posted on the social media platform X that after a recent lengthy meeting with Anthropic's executive team, he had come away with a "very good" impression of the company.
The agreement will bring Anthropic more than 300 megawatts of new computing capacity within the month, equivalent to access to more than 220,000 Nvidia GPUs. The cluster densely deploys several accelerator models, including the H100, H200, and next-generation GB200, making it one of the world's largest and fastest-deployed AI supercomputers to date. Colossus 1 was originally built by AI company xAI to train its large language model Grok. Following the official merger of SpaceX and xAI in February 2026, xAI's model training was relocated entirely to Colossus 2, freeing Colossus 1 for external lease.
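As a rough sanity check on the reported figures, the implied power budget per accelerator can be computed directly from the two numbers in the announcement. The per-GPU overhead interpretation in the comments is our assumption for illustration; the article itself states only the totals.

```python
# Back-of-envelope check of the announced figures (illustrative only;
# the facility-overhead interpretation below is an assumption, not
# something stated in the announcement).
total_power_mw = 300     # new capacity reported in the article
gpu_count = 220_000      # GPU-equivalent count reported in the article

watts_per_gpu = total_power_mw * 1_000_000 / gpu_count
print(f"~{watts_per_gpu:.0f} W per GPU")  # ~1364 W per GPU
```

A figure of roughly 1.4 kW per GPU at the facility level is plausible: a single H100, for instance, is rated at around 700 W, and cooling, networking, and other data-center overhead roughly doubles the draw per accelerator.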
With the computing power secured, Anthropic simultaneously loosened restrictions on the usage side. The five-hour rate limit for its AI programming tool, Claude Code, was doubled across the Pro, Max, Team, and per-seat enterprise plans; the peak-time throttling previously applied to Pro and Max accounts was removed entirely; and the rate limits for the Claude Opus model API were raised significantly, with the maximum input tokens per minute for Tier 3 users, for example, increased to 16 times the previous level. Claude Pro and Max paid subscribers thus received a direct expansion of their service capacity.
This transaction fits into a larger computing power strategy. Just one month earlier, Anthropic reached a multi-year, multi-gigawatt next-generation TPU supply agreement with Google and Broadcom, expected to come online gradually starting in 2027. Its long-term computing partnership with Amazon was also expanded to 5 gigawatts, backed by Trainium2 and the subsequent Trainium3 chips. Together with the 300-megawatt GPU cluster leased from SpaceX, Anthropic's cumulative computing power reserves now exceed 11 gigawatts, and a multi-architecture supply system spanning GPUs, TPUs, and custom-designed chips is gradually taking shape.
The collaboration also points toward a more futuristic goal. Anthropic explicitly expressed interest in further cooperation with SpaceX to jointly develop gigawatt-scale orbital AI computing resources. Although space-based data centers still face engineering challenges such as launch costs, thermal control, and radiation protection, SpaceX, drawing on the launch cadence of Falcon 9 and Starship and its experience operating satellite constellations, is pushing the concept into the engineering discussion phase. Neither company disclosed the financial terms or contract duration of the deal.
This article is compiled by Wedoany. Any AI citation must credit "Wedoany" as the source. If there is any infringement or other issue, please notify us promptly and we will amend or delete the content accordingly. Email: news@wedoany.com