
Alphawave Semi Unlocks 1.2 TBps Connectivity for High-Performance Compute and AI Infrastructure with 9.2 Gbps HBM3E Subsystem


Robust, chiplet-enabled platform based on Micron HBM3E supports best-in-class performance and exceptional power efficiency

Alphawave Semi, a global leader in high-speed connectivity and compute silicon for the world’s technology infrastructure, announced a 9.2 Gbps HBM3E subsystem (PHY + controller IP) silicon platform. Based on the company’s proven HBM3E IP, the platform takes chiplet-enabled memory bandwidth to new heights of 1.2 terabytes per second (TBps), addressing the demand for ultra-high-speed connectivity in high-performance compute (HPC) and accelerated compute for generative artificial intelligence (AI) applications.
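As a rough sanity check (not stated in the release), the 1.2 TBps figure is consistent with a 9.2 Gbps per-pin data rate across the standard 1024-bit-wide HBM3E stack interface; the short sketch below assumes that interface width.

```python
# Rough sanity check (assumption: standard 1024-bit JEDEC HBM3E stack interface;
# the exact configuration of Alphawave Semi's platform is not given in the release).

PER_PIN_GBPS = 9.2           # per-pin data rate, gigabits per second
INTERFACE_WIDTH_BITS = 1024  # data bits per HBM3E stack

aggregate_gbps = PER_PIN_GBPS * INTERFACE_WIDTH_BITS  # 9420.8 Gbps
aggregate_tbps = aggregate_gbps / 8 / 1000            # bits -> bytes, giga -> tera

print(f"~{aggregate_tbps:.2f} TBps per stack")        # ~1.18 TBps, i.e. roughly 1.2 TBps
```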

Alphawave Semi has successfully demonstrated its HBM3E IP subsystem at recent trade shows. Working with Micron, Alphawave Semi has run channel simulations showing performance of 9.2 Gbps in an advanced 2.5D package across an entire HBM3E system, comprising the HBM3E IP subsystem, an innovative Alphawave Semi silicon interposer and Micron’s HBM3E memory. These results demonstrate that the platform enables a significant reduction in time-to-market while delivering best-in-class industry performance and exceptional power efficiency for data center and HPC AI infrastructure.

HBM3E has emerged as a top choice for memory in AI and HPC systems because it offers the highest bandwidth, low latency, a compact footprint and superior power efficiency. Alphawave Semi customers are deploying complete HBM subsystem solutions that integrate the company’s HBM PHY with a versatile, JEDEC-compliant, highly configurable HBM controller that can be fine-tuned to maximize efficiency for application-specific AI and HPC workloads.


Alphawave Semi has also created an optimized silicon interposer design to achieve best-in-class results for signal integrity, power integrity and thermal performance at 9.2 Gbps.

“Micron is committed to advancing memory performance through our comprehensive AI product portfolio as the demand for AI continues to grow in data centers,” said Praveen Vaidyanathan, vice president and general manager of Micron’s Compute Products Group. “With their end-to-end channel simulations, we are pleased to see Alphawave Semi demonstrate a performance of 9.2 Gbps for AI accelerators with HBM3E. Together, we are empowering customers to accelerate system deployment with our Micron HBM3E solutions, delivering up to 30% lower power consumption compared to competitive offerings.”

“We are excited to lead the industry in delivering a complete 9.2 Gbps HBM3E PHY and controller chiplet-enabled platform delivering 1.2 TBps bandwidth, backed by channel simulations across the SoC, interposer and Micron’s HBM3E memory,” said Mohit Gupta, senior vice president and general manager, Custom Silicon and IP, Alphawave Semi. “Leveraging this, along with our leading-edge custom silicon and advanced 2.5D/3D packaging capabilities, allows Alphawave Semi to provide a very significant time-to-market advantage to our hyperscaler and data center infrastructure customers for their AI-enabled systems.”

Source: Businesswire
