Micron unveils 192GB SOCAMM2 to power next-gen AI data centre efficiency
Micron Technology, a memory and storage solutions company, has announced customer sampling of its 192GB SOCAMM2 (Small Outline Compression Attached Memory Module), the latest innovation designed to bring low-power, high-performance memory to the next generation of AI data centres.
The SOCAMM2 builds upon Micron’s first-to-market LPDRAM SOCAMM, delivering 50% more capacity in the same compact footprint. Built on Micron’s 1-gamma DRAM process technology, the new module improves power efficiency by more than 20% and reduces time to first token (TTFT) by up to 80% in real-time inference workloads.
This advancement comes at a time when AI data centres are under increasing pressure to balance performance with sustainability. With AI racks now housing more than 40 terabytes of CPU-attached DRAM each, the SOCAMM2’s compact, modular architecture directly supports power optimisation, scalability, and ease of maintenance.
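To put those figures in perspective, here is a minimal back-of-envelope sketch, not Micron’s arithmetic: it treats “more than 40 terabytes” as exactly 40 TB and assumes every module in the rack is the 192GB SOCAMM2 part, both of which are simplifying assumptions.

```python
# Back-of-envelope module count for the rack-level figure quoted above.
# Assumptions (ours, not Micron's): exactly 40 TB of CPU-attached DRAM
# per rack, and every module is the 192GB SOCAMM2 part.
RACK_DRAM_TB = 40      # CPU-attached DRAM per rack (from the article)
MODULE_GB = 192        # SOCAMM2 capacity per module (from the article)

modules_per_rack = RACK_DRAM_TB * 1024 / MODULE_GB
print(f"~{modules_per_rack:.0f} modules per rack")  # ~213 modules
```

At roughly two hundred modules per rack, per-module savings in power and physical footprint compound quickly, which is the efficiency argument behind the design.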
“As AI workloads become more complex and demanding, data centre servers must deliver more tokens per watt,” said Raj Narasimhan, Senior Vice President and General Manager, Cloud Memory Business Unit, Micron Technology.
“Micron’s leadership in low-power DRAM ensures our SOCAMM2 modules provide the throughput, energy efficiency, and reliability essential to powering the next generation of AI servers.”
A New Standard for AI Memory Innovation
The SOCAMM2 leverages Micron’s deep engineering expertise to combine LPDDR5X’s inherent low power consumption and high bandwidth with data centre-class durability and reliability. Originally designed for mobile environments, Micron’s low-power DRAM has been re-engineered to meet the stringent requirements of high-performance computing and AI workloads.
Building on a five-year collaboration with NVIDIA, Micron has pioneered the transition of low-power memory architectures into server-grade AI systems, delivering high-speed data throughput essential for large-context AI training and inference.
Designed for Scalability and Serviceability
SOCAMM2’s modular architecture and compact design, at one-third the size of an equivalent RDIMM, dramatically improve data centre footprint efficiency. Its stacked architecture and liquid cooling–friendly layout also make it an ideal fit for dense AI server deployments.
The module delivers:
Up to 192GB capacity per module
Speeds up to 9.6 Gbps (see the bandwidth sketch after this list)
Power efficiency improvements of more than 67% over RDIMMs
Enhanced serviceability and modular scalability for future upgrades
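For a sense of what the 9.6 Gbps pin speed implies, here is a minimal sketch of per-module peak bandwidth. The 128-bit module interface width is our assumption for illustration; the article does not state the bus width.

```python
# Illustrative peak-bandwidth estimate for a single SOCAMM2 module.
# Assumption (ours, not from the article): a 128-bit data interface.
PIN_RATE_GBPS = 9.6     # per-pin data rate quoted in the list above
BUS_WIDTH_BITS = 128    # assumed module interface width

peak_gbytes_per_s = PIN_RATE_GBPS * BUS_WIDTH_BITS / 8  # bits -> bytes
print(f"Peak bandwidth: {peak_gbytes_per_s:.1f} GB/s per module")  # 153.6
```

Under that assumption, a single module would peak at roughly 150 GB/s; the actual figure scales directly with whatever interface width the final product uses.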
These innovations enable operators to achieve higher performance-per-watt metrics, reducing operational costs while advancing sustainability goals in large-scale AI environments.
Driving Industry Standards and Ecosystem Growth
Micron has played a leading role in defining the JEDEC SOCAMM2 specification, working closely with ecosystem partners to accelerate the adoption of low-power memory across AI and cloud data centres. The company aims to standardise efficient, modular DRAM architectures to enhance global power efficiency and AI scalability.
Customer sampling of SOCAMM2 is already underway, with high-volume production aligned to OEM launch schedules later this year.
SOCAMM2 represents more than a memory upgrade; it’s a reimagining of how AI data centres balance performance, power, and precision.