Tuesday, November 5, 2024

Rambus Boosts AI Performance with 9.6 Gbps HBM3 Memory Controller IP


Rambus Inc., a premier chip and silicon IP provider making data faster and safer, announced that the Rambus HBM3 Memory Controller IP now delivers up to 9.6 Gigabits per second (Gbps), supporting the continued evolution of the HBM3 standard. That is a 50% increase over the HBM3 Gen1 data rate of 6.4 Gbps, and it enables a total memory throughput of over 1.2 Terabytes per second (TB/s) for training recommender systems, generative AI and other demanding data center workloads.
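For context, the throughput figure follows directly from the per-pin data rate and the interface width of an HBM3 device stack. The short Python sketch below is a back-of-the-envelope check of that arithmetic, not material from the announcement; the 1024-bit interface width is assumed from the JEDEC HBM3 standard.

```python
# Back-of-the-envelope check of the quoted HBM3 throughput.
# Assumption (not from the announcement): a single HBM3 device stack
# exposes a 1024-bit wide interface, per the JEDEC HBM3 standard.

PIN_DATA_RATE_GBPS = 9.6      # per-pin data rate of the Rambus controller IP
GEN1_DATA_RATE_GBPS = 6.4     # HBM3 Gen1 per-pin data rate
INTERFACE_WIDTH_BITS = 1024   # bits per HBM3 device stack (assumed)

increase_pct = (PIN_DATA_RATE_GBPS / GEN1_DATA_RATE_GBPS - 1) * 100
throughput_gb_per_s = PIN_DATA_RATE_GBPS * INTERFACE_WIDTH_BITS / 8  # gigabytes per second
throughput_tb_per_s = throughput_gb_per_s / 1000

print(f"Increase over HBM3 Gen1: {increase_pct:.0f}%")             # 50%
print(f"Per-stack throughput:    {throughput_tb_per_s:.2f} TB/s")  # ~1.23 TB/s, i.e. over 1.2 TB/s
```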

“HBM3 is the memory of choice for AI/ML training, with large language models requiring the constant advancement of high-performance memory technologies,” said Neeraj Paliwal, general manager of Silicon IP at Rambus. “Thanks to Rambus innovation and engineering excellence, we’re delivering the industry’s leading-edge performance of 9.6 Gbps in our HBM3 Memory Controller IP.”


“HBM is a crucial memory technology for faster, more efficient processing of large AI training and inferencing sets, such as those used for generative AI,” said Soo-Kyoum Kim, vice president, memory semiconductors at IDC. “It is critical that HBM IP providers like Rambus continually advance performance to enable leading-edge AI accelerators that meet the demanding requirements of the market.”

HBM uses an innovative 2.5D/3D architecture that offers high memory bandwidth and low power consumption for AI accelerators. With excellent latency and a compact footprint, it has become a leading choice for AI training hardware.

The Rambus HBM3 Memory Controller IP is designed for use in applications requiring high memory throughput, low latency and full programmability. The Controller is a modular, highly configurable solution that can be tailored to each customer’s unique requirements for size and performance. Rambus provides integration and validation of the HBM3 Controller with the customer’s choice of third-party HBM3 PHY.

SOURCE: BusinessWire
