Friday, November 22, 2024

Rambus Boosts AI Performance with 9.6 Gbps HBM3 Memory Controller IP

Rambus Inc., a premier chip and silicon IP provider making data faster and safer, announced that the Rambus HBM3 Memory Controller IP now delivers up to 9.6 Gigabits per second (Gbps) performance, supporting the continued evolution of the HBM3 standard. Representing a 50% increase over the HBM3 Gen1 data rate of 6.4 Gbps, the Rambus HBM3 Memory Controller can enable a total memory throughput of over 1.2 Terabytes per second (TB/s) for the training of recommender systems, generative AI and other demanding data center workloads.
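
For context, the headline bandwidth follows directly from the per-pin data rate and the width of the HBM3 interface. The short sketch below is a back-of-the-envelope check, assuming the standard 1024-bit HBM3 interface defined by JEDEC; the interface width is an assumption, while the 9.6 Gbps and 6.4 Gbps data rates come from the announcement.

# Back-of-the-envelope check of the quoted throughput figure, assuming the
# standard 1024-bit HBM3 interface per device stack (assumption; not stated in the release).
data_rate_gbps = 9.6            # per-pin data rate from the announcement
gen1_rate_gbps = 6.4            # HBM3 Gen1 data rate from the announcement
interface_width_bits = 1024     # assumed HBM3 interface width per stack

throughput_gb_per_s = data_rate_gbps * interface_width_bits / 8
print(f"Peak throughput: {throughput_gb_per_s:.1f} GB/s")              # 1228.8 GB/s, i.e. ~1.2 TB/s
print(f"Uplift over Gen1: {data_rate_gbps / gen1_rate_gbps - 1:.0%}")  # 50%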

“HBM3 is the memory of choice for AI/ML training, with large language models requiring the constant advancement of high-performance memory technologies,” said Neeraj Paliwal, general manager of Silicon IP at Rambus. “Thanks to Rambus innovation and engineering excellence, we’re delivering the industry’s leading-edge performance of 9.6 Gbps in our HBM3 Memory Controller IP.”

“HBM is a crucial memory technology for faster, more efficient processing of large AI training and inferencing sets, such as those used for generative AI,” said Soo-Kyoum Kim, vice president, memory semiconductors at IDC. “It is critical that HBM IP providers like Rambus continually advance performance to enable leading-edge AI accelerators that meet the demanding requirements of the market.”

HBM uses an innovative 2.5D/3D architecture that delivers high memory bandwidth and low power consumption for AI accelerators. With excellent latency and a compact footprint, it has become a leading choice for AI training hardware.

The Rambus HBM3 Memory Controller IP is designed for use in applications requiring high memory throughput, low latency and full programmability. The Controller is a modular, highly configurable solution that can be tailored to each customer’s unique requirements for size and performance. Rambus provides integration and validation of the HBM3 Controller with the customer’s choice of third-party HBM3 PHY.

SOURCE: BusinessWire
