en.Wedoany.com Reported - On April 19, 2026, SK Hynix issued an official statement announcing the commencement of mass production for its next-generation memory module, the 192GB SOCAMM2, specifically designed for NVIDIA's Vera Rubin platform. Justin Kim, Head of AI Infrastructure and Chief Marketing Officer at SK Hynix, stated in the announcement that by supplying the 192GB SOCAMM2 product, SK Hynix has established a new standard for AI memory performance. According to the statement, the company has secured a stable mass production system in advance to meet the demands of global cloud service provider customers.
The SOCAMM2 is built on LPDDR5X low-power DRAM fabricated on SK Hynix's 1c process, the sixth generation of the 10-nanometer class, adapting low-power memory traditionally used in mobile products such as smartphones for server environments. The module uses a compression-attach connector design with a slim form factor and high scalability; the compression connector improves signal integrity and makes module replacement straightforward. SK Hynix emphasized that the 1c-process SOCAMM2 delivers over twice the bandwidth and more than a 75% improvement in power efficiency compared with traditional RDIMMs.
This product is specifically designed for NVIDIA's Vera Rubin platform, NVIDIA's next-generation computing platform for agentic AI, unveiled at GTC 2026. The Rubin GPU is built on TSMC's 3nm process, integrates 336 billion transistors, and carries 288GB of HBM4 memory, delivering 22TB/s of bandwidth and 50 PFLOPS of FP4 inference performance. The full Vera Rubin NVL72 rack system supports 3.6 EFLOPS of inference performance, with a system memory bandwidth of 22TB/s, reducing inference cost per token to one-tenth that of the previous Blackwell platform. SK Hynix expects SOCAMM2 to fundamentally address memory bottlenecks in the training and inference of large language models with hundreds of billions of parameters.
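The rack-level figure quoted above is consistent with the per-GPU figure if the "NVL72" name is read as 72 Rubin GPUs per rack (an assumption based on the product name, not stated in the article): 72 × 50 PFLOPS = 3,600 PFLOPS = 3.6 EFLOPS. A minimal sanity-check sketch:

```python
# Sanity check: do the per-GPU and rack-level FP4 inference figures agree?
# Assumes "NVL72" means 72 Rubin GPUs per rack (not stated in the article).
GPUS_PER_RACK = 72
FP4_PFLOPS_PER_GPU = 50           # quoted FP4 inference performance per Rubin GPU

rack_pflops = GPUS_PER_RACK * FP4_PFLOPS_PER_GPU
rack_eflops = rack_pflops / 1000  # 1 EFLOPS = 1,000 PFLOPS

print(f"{rack_eflops} EFLOPS")    # → 3.6 EFLOPS, matching the quoted rack figure
```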
SK Hynix notes that the AI market is shifting from training toward inference, and SOCAMM2, which enables low-power operation of large language models, is drawing attention as a next-generation memory solution. In the NVIDIA Vera Rubin platform, the memory tray paired with a single Vera CPU can provide approximately 1.5TB of LPDDR5X memory. SOCAMM2 serves as system memory, forming a complementary architecture with HBM4 graphics memory. Yonhap News Agency reported on April 20 that the module adapts low-power mobile memory for server environments, positioning it as the main memory solution for next-generation AI servers.
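Taking the tray figure at face value, the module count per Vera CPU can be back-calculated: roughly 1.5TB / 192GB ≈ 8 modules. This assumes the tray is populated entirely with 192GB SOCAMM2 modules and that "1.5TB" means 1,536GB (binary-style units); the article specifies neither.

```python
# Back-of-envelope: how many 192GB SOCAMM2 modules fill the ~1.5TB
# LPDDR5X tray per Vera CPU? Assumes the tray uses only 192GB modules
# and that 1.5TB = 1536GB; the article states neither.
TRAY_CAPACITY_GB = 1536
MODULE_CAPACITY_GB = 192

modules_per_tray = TRAY_CAPACITY_GB / MODULE_CAPACITY_GB
print(modules_per_tray)   # → 8.0
```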
SOCAMM2 has been dubbed the "second HBM" by Korean media, filling the demand for an intermediate tier of low-power, high-bandwidth, replaceable modules between HBM and DDR5. On the competitive front, Samsung Electronics resolved SOCAMM2 module warpage in early April 2026 by adopting low-temperature solder, cutting the soldering process temperature from above 260°C to below 150°C and gaining ground in development and mass-production readiness. Micron delivered the world's first 256GB SOCAMM2 customer samples in March 2026, offering approximately 33% more capacity than the 192GB version. SK Hynix stated it will work closely with NVIDIA to solve AI infrastructure bottlenecks and solidify its position as the most trusted AI memory solution provider.
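The capacity comparison in the paragraph above is easy to verify: 256GB over 192GB is a ratio of 4:3, i.e. about 33% more. A quick check:

```python
# Verify the quoted capacity advantage: Micron's 256GB vs. SK Hynix's 192GB.
micron_gb = 256
sk_hynix_gb = 192

extra_fraction = micron_gb / sk_hynix_gb - 1
print(f"{extra_fraction:.1%}")   # → 33.3%
```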