Nov. 20 - Nvidia (NVDA.O) has begun transitioning its artificial intelligence server platforms from traditional DDR5 memory to LPDDR low-power memory modules, the type commonly used in smartphones and tablets, to reduce overall system power consumption, according to a research report released Wednesday by Counterpoint Research.
The shift is creating new pressure on global memory supply chains. The industry is already grappling with shortages of older-generation DRAM products, and Counterpoint warns that Nvidia's adoption of LPDDR at server scale will add demand equivalent to that of a leading smartphone manufacturer, concentrated in the data-center segment.
Each AI server requires significantly more memory modules than a mobile device, meaning the same production lines that supply hundreds of millions of handsets annually will now need to satisfy an additional high-volume customer with different technical specifications and quality requirements.
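As a rough illustration of that scale claim, the back-of-envelope sketch below compares annual LPDDR demand from AI servers against a large handset maker's pull. Every figure in it is an illustrative assumption, not a number from the Counterpoint report; the point is only that modest server volumes multiplied by very large per-server capacities land in the same range as hundreds of millions of phones.

```python
# Back-of-envelope sketch. All figures are illustrative assumptions,
# NOT data from the Counterpoint report or Nvidia.

SMARTPHONES_PER_YEAR = 200_000_000   # assumed annual units for a leading handset maker
LPDDR_PER_PHONE_GB = 12              # assumed average LPDDR capacity per phone

AI_SERVERS_PER_YEAR = 1_500_000      # assumed annual AI server shipments
LPDDR_PER_SERVER_GB = 1_152          # assumed LPDDR capacity per server

phone_demand_gb = SMARTPHONES_PER_YEAR * LPDDR_PER_PHONE_GB
server_demand_gb = AI_SERVERS_PER_YEAR * LPDDR_PER_SERVER_GB

print(f"Smartphone LPDDR demand: {phone_demand_gb / 1e9:.1f} billion GB/yr")
print(f"AI-server LPDDR demand:  {server_demand_gb / 1e9:.1f} billion GB/yr")
print(f"Server demand vs. phone demand: {server_demand_gb / phone_demand_gb:.0%}")
```

Under these assumed inputs, server demand works out to roughly 70% of the smartphone figure, consistent with the report's characterization of Nvidia as "a customer on the scale of a major smartphone maker."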
Major memory manufacturers—including Samsung Electronics (005930.KS), SK Hynix (000660.KS), and Micron Technology (MU.O)—had previously reduced output of commodity DRAM and LPDDR to prioritize high-bandwidth memory (HBM) production for AI accelerators. Counterpoint indicates suppliers now face difficult decisions about whether to reallocate capacity back toward LPDDR to accommodate Nvidia’s requirements.
"The bigger risk on the horizon is with advanced memory, as Nvidia’s recent pivot to LPDDR means they're a customer on the scale of a major smartphone maker - a seismic shift for the supply chain which can’t easily absorb this scale of demand," Counterpoint stated in the report.
The research firm forecasts that server-grade memory module prices could double by the end of 2026 as a direct result of the increased competition for LPDDR capacity. Overall DRAM pricing across all segments is projected to rise approximately 50% from current levels through the second quarter of 2026.
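To put those projections on a common footing, the sketch below converts them into compound monthly growth rates, assuming the article's Nov. 20 dateline falls in 2025 (as the 2026 projection horizons suggest); the dates and the conversion are the author's arithmetic, not figures from the report.

```python
# Convert the report's projections (2x by end-2026; +50% by Q2 2026)
# into compound monthly growth rates. The Nov. 2025 start date is an
# assumption inferred from the article, not stated in the report.
from datetime import date

def monthly_rate(multiple: float, start: date, end: date) -> float:
    """Compound monthly growth rate needed to reach `multiple` over the period."""
    months = (end.year - start.year) * 12 + (end.month - start.month)
    return multiple ** (1 / months) - 1

report_date = date(2025, 11, 20)

server_rate = monthly_rate(2.0, report_date, date(2026, 12, 31))  # prices double by end-2026
dram_rate = monthly_rate(1.5, report_date, date(2026, 6, 30))     # +50% by Q2 2026

print(f"Server memory: ~{server_rate:.1%} compound growth per month")
print(f"Overall DRAM:  ~{dram_rate:.1%} compound growth per month")
```

Both projections imply sustained price growth of roughly 5-6% per month, with the broader DRAM increase front-loaded into a shorter window.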
Higher memory costs would add to the already substantial expenses faced by cloud service providers and enterprises building large-scale AI infrastructure, where graphics processors, power systems, and cooling already represent the largest budget items.
Counterpoint noted that memory suppliers are likely to respond by expanding LPDDR production lines during 2025 and 2026, but lead times for new fabrication capacity typically exceed 18 months, limiting near-term relief.
The report was published hours before Nvidia’s scheduled release of quarterly financial results on Wednesday, during which investors and analysts are expected to seek additional details on the company’s memory strategy and its impact on future data-center product roadmaps.
Industry sources indicate that initial systems using LPDDR memory are already in qualification with select hyperscale customers, with broader commercial availability expected during the second half of 2025.
The development highlights the cascading effects of rapid AI infrastructure growth on established semiconductor supply chains, as components originally optimized for mobile devices are increasingly repurposed for high-performance computing applications.