Monday, May 20, 2024

Groq® is Selected to Provide Access to World’s Fastest AI Inference Engine for the National AI Research Resource (NAIRR) Pilot


Real-time Inference Leader Joins Elite Group Offering U.S.-based Researchers and Educators Access to Cutting-edge AI Technologies, Powering Responsible AI Innovation 

Groq®, the leader in real-time AI inference, announced its participation in the National Artificial Intelligence Research Resource (NAIRR) Pilot. The Pilot, a program led by the U.S. National Science Foundation, marks the first step toward a shared national research infrastructure connecting U.S. researchers and educators to responsible and trustworthy AI research resources. In collaboration with 13 federal agencies and 25 private sector, nonprofit, and philanthropic organizations, Groq is powering the next phase of responsible AI research, discovery, and innovation by providing access to its LPU Inference Engine – the only solution delivering real-time AI inference today – via GroqCloud.

“Groq was founded, in part, to end the ‘haves and have-nots’ in AI,” said Groq Public Sector President Aileen Black. “Lack of access to necessary resources should never prevent a researcher from succeeding at the next Operation Warp Speed. It is an honor to provide the next generation of AI innovators with the real-time inference needed to run text-based applications and other AI workloads at scale.”

With GroqCloud, researchers can leverage leading open-source Large Language Models (LLMs) from providers such as Meta, Google, and Mistral, whose models lead industry benchmarks and human evaluations. The LPU Inference Engine makes it easy to conduct research, as well as to test and deploy new generative AI applications and other AI workloads, because it delivers 10x the speed of comparable GPU-based inference systems while consuming just 1/10th the energy. Researchers can also access Groq technology via the Argonne Leadership Computing Facility (ALCF), which includes a GroqRack compute cluster providing an extensible accelerator network of nine GroqNode™ servers with a rotational multi-node network topology.
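As an illustrative sketch only (not part of the announcement), a researcher granted GroqCloud access might query a hosted open-source model through an OpenAI-style chat-completions request. The endpoint URL, model name, and environment-variable name below are assumptions for illustration, not details confirmed by the release:

```python
import json

# Hypothetical: GroqCloud is assumed here to expose an OpenAI-compatible
# chat-completions endpoint; URL and model name are illustrative guesses.
GROQCLOUD_ENDPOINT = "https://api.groq.com/openai/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Assemble the JSON body for a chat-completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

if __name__ == "__main__":
    body = build_chat_request(
        "llama3-8b-8192",  # assumed model identifier
        "Summarize the NAIRR Pilot in one sentence.",
    )
    print(json.dumps(body, indent=2))
    # To actually send the request, a researcher would supply an API key, e.g.:
    # headers = {"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"}
    # requests.post(GROQCLOUD_ENDPOINT, json=body, headers=headers)
```

The sketch separates payload construction from transport so the request body can be inspected or logged before any credentials are involved.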


“Inference plays an increasingly pivotal role in the AI ecosystem of computing, data, software, and platforms researchers require to advance innovation and scientific discovery in a responsible manner for the country. Groq’s contribution to the NAIRR Pilot will enable researchers to access leading models, helping to realize their boldest research visions,” said Katie Antypas, director of the Office of Advanced Cyberinfrastructure at the U.S. National Science Foundation.

The LPU Inference Engine is a new class of processing system developed to handle computationally intensive applications with a sequential component, such as LLMs, audio, control systems, network observability, and more. While CPUs and GPUs excel at data processing and model training, they struggle to execute at-scale inference for ultra-low-latency, real-time workloads: sub-optimal latency, throughput, and power consumption, along with the sequential nature of these workloads, limit their effectiveness. Groq designed the LPU to address these limitations, delivering repeatable ultra-low latency without sacrificing performance.

Source: PRNewsWire
