FuriosaAI, an emerging leader in the AI semiconductor space, unveiled RNGD (pronounced “Renegade”), its new AI accelerator, at Hot Chips 2024. RNGD is positioned as the most efficient data center accelerator for high-performance large language model (LLM) and multimodal model inference, disrupting an AI hardware landscape long defined by legacy chipmakers and high-profile startups. Founded in 2017 by three engineers with backgrounds at AMD, Qualcomm, and Samsung, the company has pursued a strategy of rapid innovation and product delivery, culminating in the fast development and unveiling of RNGD.
Furiosa successfully completed the full bring-up of RNGD after receiving the first silicon samples from their partner, TSMC. This achievement reinforces the company’s track record of fast and seamless technology development. With their first-generation chip, introduced in 2021, Furiosa submitted their first MLPerf benchmark results within 3 weeks of receiving silicon and achieved a 113% performance increase in the next submission through compiler enhancements.
Early testing of RNGD has shown promising results with large language models such as GPT-J and Llama 3.1. A single RNGD PCIe card delivers throughput of 2,000 to 3,000 tokens per second (depending on context length) for models with around 10 billion parameters.
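For readers unfamiliar with the metric, throughput here is simply generated tokens divided by wall-clock time. The sketch below shows how such a figure might be measured; the `generate` callable is a hypothetical stand-in for an inference runtime's entry point, not FuriosaAI's actual SDK.

```python
import time

def measure_throughput(generate, prompts, max_new_tokens=256):
    """Estimate decode throughput (tokens/sec) over a batch of prompts.

    `generate` is a hypothetical placeholder: it is assumed to take a
    prompt and return the list of generated token ids. This is an
    illustrative sketch, not FuriosaAI's API.
    """
    start = time.perf_counter()
    outputs = [generate(p, max_new_tokens=max_new_tokens) for p in prompts]
    elapsed = time.perf_counter() - start
    total_tokens = sum(len(tokens) for tokens in outputs)
    return total_tokens / elapsed  # tokens per second
```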
“The launch of RNGD is the result of years of innovation, leading to a one-shot silicon success and exceptionally rapid bring-up process. RNGD is a sustainable and accessible AI computing solution that meets the industry’s real-world needs for inference,” said June Paik, Co-Founder and CEO of FuriosaAI. “With our hardware now starting to run LLMs at high performance, we’re entering an exciting phase of continuous advancement. I am incredibly proud and grateful to the team for their hard work and continuous dedication.”
June Paik will present performance benchmarks at Hot Chips today in a presentation titled “Furiosa RNGD: A Tensor Contraction Processor for Sustainable AI Computing,” which further underscores RNGD’s capabilities and leaves industry experts eagerly anticipating what comes next. He will also offer a first hands-on look at the fully functioning RNGD card, along with a live demo, at the Furiosa booth.
RNGD’s key innovations include:
- A Tensor Contraction Processor (TCP) architecture whose core primitive is tensor contraction, a generalization of matrix multiplication, striking a balance of efficiency, programmability, and performance.
- Programmability through a robust compiler co-designed with the TCP architecture, which treats entire models as single fused operations.
- Efficiency, with a TDP of 150W compared to 1,000W+ for leading GPUs.
- High performance, with 48GB of HBM3 memory that lets models like Llama 3.1 8B run efficiently on a single card (see the sketch below).
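As a rough, assumption-based check of that last point (illustrative arithmetic only, not vendor data): an 8-billion-parameter model stored in 16-bit precision occupies roughly 16 GB of weights, leaving most of the 48GB HBM3 capacity free for KV cache and activations.

```python
# Back-of-the-envelope memory estimate for an ~8B-parameter model in FP16/BF16.
# Illustrative assumptions; actual footprints depend on the runtime and workload.
params = 8e9              # ~8 billion parameters (e.g. Llama 3.1 8B)
bytes_per_param = 2       # 16-bit weights
hbm_capacity_gb = 48      # RNGD's HBM3 capacity

weights_gb = params * bytes_per_param / 1e9    # ~16 GB of weights
headroom_gb = hbm_capacity_gb - weights_gb     # ~32 GB left for KV cache, activations

print(f"Weights ≈ {weights_gb:.0f} GB; headroom ≈ {headroom_gb:.0f} GB of {hbm_capacity_gb} GB HBM3")
```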
What our industry partners have to say:
“The Furiosa RNGD AI Inference solution drives the adoption of green computing with Supermicro. By integrating Furiosa’s technology, Supermicro systems can reduce power consumption per card while still delivering exceptional inference performance,” said Vik Malyala, SVP, Technology and AI; President and Managing Director, EMEA of Supermicro.
“The collaboration between GUC and FuriosaAI to deliver RNGD with exceptional performance and power efficiency hinges on meticulous planning and execution. Achieving this requires a deep understanding of modern AI software and hardware. FuriosaAI has consistently demonstrated excellence from design to delivery, creating the most efficient AI inference chips in the industry,” said Aditya Raina, CMO of GUC.
Source: PRNewswire